Hi Ryan,
On 3/10/25 10:04 PM, Ryan Roberts wrote:
It is best practice for all pte accesses to go via the arch helpers, to
ensure non-torn values and to allow the arch to intervene where needed
(contpte for arm64 for example). While in this case it was probably safe
to directly dereference, let's tidy it up for consistency.
Signed-off-by: Ryan Roberts <ryan.roberts@xxxxxxx>
---
mm/migrate.c | 2 +-
1 file changed, 1 insertion(+), 1 deletion(-)
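For context, "going via the arch helpers" here means ptep_get() rather than
dereferencing the pte_t pointer directly. If I remember correctly, the
generic fallback in include/linux/pgtable.h is just a READ_ONCE() of the
entry, roughly:

static inline pte_t ptep_get(pte_t *ptep)
{
	return READ_ONCE(*ptep);
}

but arches like arm64 override it (e.g. to fold in access/dirty bits from a
contpte block), which is exactly why open-coded dereferences bypass the
arch's intervention.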
This looks good to me, so:
Reviewed-by: Qi Zheng <zhengqi.arch@xxxxxxxxxxxxx>
BTW, there are many other places in the kernel that directly
dereference pmd_t* and pud_t*, etc.
For example:
root@debian:~# grep "*vmf->pmd" . -rwn
./mm/memory.c:5113: if (pmd_none(*vmf->pmd) && !vmf->prealloc_pte) {
./mm/memory.c:5207: if (unlikely(!pmd_none(*vmf->pmd)))
./mm/memory.c:5339: if (pmd_none(*vmf->pmd)) {
./mm/memory.c:5490: if (pmd_none(*vmf->pmd)) {
./mm/memory.c:5996: if (unlikely(pmd_none(*vmf->pmd))) {
./mm/filemap.c:3612: if (pmd_trans_huge(*vmf->pmd)) {
./mm/filemap.c:3618: if (pmd_none(*vmf->pmd) && folio_test_pmd_mappable(folio)) {
./mm/filemap.c:3628: if (pmd_none(*vmf->pmd) && vmf->prealloc_pte)
./mm/huge_memory.c:1237: if (unlikely(!pmd_none(*vmf->pmd))) {
./mm/huge_memory.c:1352: if (pmd_none(*vmf->pmd)) {
./mm/huge_memory.c:1496: if (pmd_none(*vmf->pmd)) {
./mm/huge_memory.c:1882: if (unlikely(!pmd_same(*vmf->pmd, vmf->orig_pmd)))
./mm/huge_memory.c:1947: if (unlikely(!pmd_same(*vmf->pmd, orig_pmd))) {
./mm/huge_memory.c:1965: if (unlikely(!pmd_same(*vmf->pmd, orig_pmd))) {
./fs/dax.c:1935: if (pmd_trans_huge(*vmf->pmd) || pmd_devmap(*vmf->pmd)) {
./fs/dax.c:2058: if (!pmd_none(*vmf->pmd) && !pmd_trans_huge(*vmf->pmd) &&
./fs/dax.c:2059: !pmd_devmap(*vmf->pmd)) {
Would it be best to clean them up as well?
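If so, I'd expect each site to be converted the same way as this patch,
using pmdp_get()/pudp_get(). An untested sketch for the first mm/memory.c
hit above:

-	if (pmd_none(*vmf->pmd) && !vmf->prealloc_pte) {
+	if (pmd_none(pmdp_get(vmf->pmd)) && !vmf->prealloc_pte) {

(That would be fairly mechanical but large churn, so perhaps better done
per-subsystem.)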
Thanks,
Qi
diff --git a/mm/migrate.c b/mm/migrate.c
index 22e270f727ed..33a22c2d6b20 100644
--- a/mm/migrate.c
+++ b/mm/migrate.c
@@ -202,7 +202,7 @@ static bool try_to_map_unused_to_zeropage(struct page_vma_mapped_walk *pvmw,
return false;
VM_BUG_ON_PAGE(!PageAnon(page), page);
VM_BUG_ON_PAGE(!PageLocked(page), page);
- VM_BUG_ON_PAGE(pte_present(*pvmw->pte), page);
+ VM_BUG_ON_PAGE(pte_present(ptep_get(pvmw->pte)), page);
if (folio_test_mlocked(folio) || (pvmw->vma->vm_flags & VM_LOCKED) ||
mm_forbids_zeropage(pvmw->vma->vm_mm))
--
2.43.0