On Mon, Jun 09, 2014 at 01:04:08PM -0700, Dave Hansen wrote:
> On 06/06/2014 03:58 PM, Naoya Horiguchi wrote:
> > @@ -6723,14 +6723,9 @@ static int mem_cgroup_count_precharge_pmd(pmd_t *pmd,
> >  					struct mm_walk *walk)
> >  {
> >  	struct vm_area_struct *vma = walk->vma;
> > -	spinlock_t *ptl;
> >  
> > -	if (pmd_trans_huge_lock(pmd, vma, &ptl) == 1) {
> > -		if (get_mctgt_type_thp(vma, addr, *pmd, NULL) == MC_TARGET_PAGE)
> > -			mc.precharge += HPAGE_PMD_NR;
> > -		spin_unlock(ptl);
> > -	} else
> > -		skip->control = PTWALK_DOWN;
> > +	if (get_mctgt_type_thp(vma, addr, *pmd, NULL) == MC_TARGET_PAGE)
> > +		mc.precharge += HPAGE_PMD_NR;
> >  	return 0;
> >  }
>
> I guess my series did two things:
> 1. move page table walking to the walk_page_range() code
> 2. make a new walk handler that can take arbitrarily-sized ptes
>
> This does (1) quite nicely and has some nice code savings.  I still
> think (2) has some value, and I like my approach, but this is
> definitely a step in the right direction.

Thank you. And yes, I'm planning to add (2) to this series in a later
version.

Naoya Horiguchi
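
For reference, here is a rough sketch of what a caller can look like
once the walking loop lives in walk_page_range(). This is illustrative
only, not part of the patch: count_thp_pmd(), count_thps(), and
thp_count are invented names, and the handler assumes the reworked
walker takes the THP lock before invoking ->pmd_entry, as the removal
of pmd_trans_huge_lock() in the hunk above implies.

#include <linux/mm.h>
#include <linux/huge_mm.h>

/* Running total of pages mapped by huge pmds (illustration only). */
static unsigned long thp_count;

static int count_thp_pmd(pmd_t *pmd, unsigned long addr,
			 unsigned long end, struct mm_walk *walk)
{
	/* The walker hands us each populated pmd; just inspect it. */
	if (pmd_trans_huge(*pmd))
		thp_count += HPAGE_PMD_NR;
	return 0;	/* 0 means "continue the walk" */
}

static void count_thps(struct mm_struct *mm)
{
	struct mm_walk walk = {
		.pmd_entry	= count_thp_pmd,
		.mm		= mm,
	};

	down_read(&mm->mmap_sem);
	walk_page_range(0, TASK_SIZE, &walk);
	up_read(&mm->mmap_sem);
}

The code savings Dave mentions come from exactly this shape: the
handler is a per-entry visitor, so callers no longer duplicate the
pgd/pud/pmd descent or the THP locking dance.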