On Wed, Aug 31, 2016 at 05:17:30PM +0200, Michal Hocko wrote:
> On Wed 31-08-16 16:04:57, James Morse wrote:
> > Trying to walk all of virtual memory requires architecture specific
> > knowledge. On x86_64, addresses must be sign extended from bit 48,
> > whereas on arm64 the top VA_BITS of address space have their own set
> > of page tables.
> >
> > mem_cgroup_count_precharge() and mem_cgroup_move_charge() both call
> > walk_page_range() on the range 0 to ~0UL; neither provides a pte_hole
> > callback, which causes the current implementation to skip non-vma regions.
> >
> > As this call only expects to walk user address space, make it walk
> > 0 to 'highest_vm_end'.
> >
> > Signed-off-by: James Morse <james.morse@xxxxxxx>
> > Cc: Naoya Horiguchi <n-horiguchi@xxxxxxxxxxxxx>
> > ---
> > This is in preparation for an RFC series that allows walk_page_range() to
> > walk kernel page tables too.
>
> OK, so do I get it right that this is only needed with that change?
> Because AFAICS walk_page_range will be bound to the last vma->vm_end
> right now.

I think this is correct; find_vma() in walk_page_range() does that.

> If this is the case this should be mentioned in the changelog
> because the above might confuse somebody to think this is a bug fix.
>
> Other than that this seems reasonable to me.

I'm fine with this change.

Acked-by: Naoya Horiguchi <n-horiguchi@xxxxxxxxxxxxx>