On Tue, 31 Mar 2020 16:56:04 +0800 "Huang, Ying" <ying.huang@xxxxxxxxx> wrote:

> From: Huang Ying <ying.huang@xxxxxxxxx>
>
> Currently, when reading /proc/PID/smaps, PMD migration entries in the
> page tables are simply ignored.  To improve the accuracy of
> /proc/PID/smaps, parsing and processing of those entries is added.

It would be helpful to show the before-and-after output in the changelog.

> --- a/fs/proc/task_mmu.c
> +++ b/fs/proc/task_mmu.c
> @@ -548,8 +548,17 @@ static void smaps_pmd_entry(pmd_t *pmd, unsigned long addr,
>  	bool locked = !!(vma->vm_flags & VM_LOCKED);
>  	struct page *page;
>  
> -	/* FOLL_DUMP will return -EFAULT on huge zero page */
> -	page = follow_trans_huge_pmd(vma, addr, pmd, FOLL_DUMP);
> +	if (pmd_present(*pmd)) {
> +		/* FOLL_DUMP will return -EFAULT on huge zero page */
> +		page = follow_trans_huge_pmd(vma, addr, pmd, FOLL_DUMP);
> +	} else if (unlikely(is_swap_pmd(*pmd))) {
> +		swp_entry_t entry = pmd_to_swp_entry(*pmd);
> +
> +		VM_BUG_ON(!is_migration_entry(entry));

I don't think this justifies nuking the kernel.  A WARN()-and-recover
would be better.

> +		page = migration_entry_to_page(entry);
> +	} else {
> +		return;
> +	}
>  	if (IS_ERR_OR_NULL(page))
>  		return;
>  	if (PageAnon(page))
> @@ -578,8 +587,7 @@ static int smaps_pte_range(pmd_t *pmd, unsigned long addr, unsigned long end,
>  
>  	ptl = pmd_trans_huge_lock(pmd, vma);
>  	if (ptl) {
> -		if (pmd_present(*pmd))
> -			smaps_pmd_entry(pmd, addr, walk);
> +		smaps_pmd_entry(pmd, addr, walk);
>  		spin_unlock(ptl);
>  		goto out;
>  	}
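
Something along these lines (untested, just to sketch the
WARN()-and-recover idea; the exact form is up to you) would keep smaps
best-effort for this entry instead of taking the machine down:

	} else if (unlikely(is_swap_pmd(*pmd))) {
		swp_entry_t entry = pmd_to_swp_entry(*pmd);

		/*
		 * A non-present huge PMD under the PMD lock should only
		 * be a migration entry here; warn once and skip it
		 * rather than BUG if that assumption is ever violated.
		 */
		if (WARN_ON_ONCE(!is_migration_entry(entry)))
			return;
		page = migration_entry_to_page(entry);
	} else {
		return;
	}

smaps is only reporting statistics, so silently dropping one entry
after the warning seems preferable to a VM_BUG_ON().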