Hello, sorry for the long delay.

> In the specific case of show_numa_map(), the custom page table walking
> logic implemented in mempolicy.c does not provide any special service
> beyond that provided by walk_page_range().
>
> Also, converting show_numa_map() to use the generic routine decouples
> the function from mempolicy.c, allowing it to be moved out of the mm
> subsystem and into fs/proc.
>
> Signed-off-by: Stephen Wilson <wilsons@xxxxxxxx>
> ---
>  mm/mempolicy.c |   53 ++++++++++++++++++++++++++++++++++++++++++++++-------
>  1 files changed, 46 insertions(+), 7 deletions(-)
>
> diff --git a/mm/mempolicy.c b/mm/mempolicy.c
> index 5bfb03e..dfe27e3 100644
> --- a/mm/mempolicy.c
> +++ b/mm/mempolicy.c
> @@ -2568,6 +2568,22 @@ static void gather_stats(struct page *page, void *private, int pte_dirty)
>  	md->node[page_to_nid(page)]++;
>  }
>
> +static int gather_pte_stats(pte_t *pte, unsigned long addr,
> +		unsigned long pte_size, struct mm_walk *walk)
> +{
> +	struct page *page;
> +
> +	if (pte_none(*pte))
> +		return 0;
> +
> +	page = pte_page(*pte);
> +	if (!page)
> +		return 0;

The original check_pte_range() has the following logic:

	orig_pte = pte = pte_offset_map_lock(vma->vm_mm, pmd, addr, &ptl);
	do {
		struct page *page;
		int nid;

		if (!pte_present(*pte))
			continue;

		page = vm_normal_page(vma, addr, *pte);
		if (!page)
			continue;

		/*
		 * vm_normal_page() filters out zero pages, but there might
		 * still be PageReserved pages to skip, perhaps in a VDSO.
		 * And we cannot move PageKsm pages sensibly or safely yet.
		 */
		if (PageReserved(page) || PageKsm(page))
			continue;

		gather_stats(page, private, pte_dirty(*pte));

Why did you drop so many of these checks? Is that safe?

The other parts look good to me.
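
For illustration, a minimal sketch of gather_pte_stats() with those
checks restored might look like the below. This is untested, and it
assumes something the quoted patch does not show: that walk->private
carries both the vma and the stats pointer (the struct numa_stats_walk
container and its field names are hypothetical).

	/*
	 * Hypothetical container: the vma is needed for vm_normal_page()
	 * and the stats pointer for gather_stats(), so assume both ride
	 * in walk->private.
	 */
	struct numa_stats_walk {
		struct vm_area_struct *vma;
		void *md;	/* forwarded to gather_stats() */
	};

	static int gather_pte_stats(pte_t *pte, unsigned long addr,
			unsigned long pte_size, struct mm_walk *walk)
	{
		struct numa_stats_walk *nsw = walk->private;
		struct page *page;

		/* Skip swap and migration entries, not just empty ptes. */
		if (!pte_present(*pte))
			return 0;

		/* vm_normal_page() filters out the zero page and
		 * special mappings that pte_page() would not. */
		page = vm_normal_page(nsw->vma, addr, *pte);
		if (!page)
			return 0;

		/*
		 * As in check_pte_range(): there might still be
		 * PageReserved pages to skip, perhaps in a VDSO, and
		 * PageKsm pages cannot be handled sensibly or safely yet.
		 */
		if (PageReserved(page) || PageKsm(page))
			return 0;

		gather_stats(page, nsw->md, pte_dirty(*pte));
		return 0;
	}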
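
On the caller side, the conversion the changelog describes would
presumably wire this up through a struct mm_walk, roughly as follows
(again only a sketch, using the hypothetical numa_stats_walk from
above and the .pte_entry/.mm/.private fields of struct mm_walk):

	struct numa_stats_walk nsw = { .vma = vma, .md = md };
	struct mm_walk walk = {
		.pte_entry	= gather_pte_stats,
		.mm		= vma->vm_mm,
		.private	= &nsw,
	};

	walk_page_range(vma->vm_start, vma->vm_end, &walk);

With this shape the page-table traversal itself lives in
walk_page_range() rather than being open-coded in mempolicy.c, which
is what allows show_numa_map() to move into fs/proc as the changelog
says.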