On Mon, Jun 02, 2014 at 04:49:18PM -0700, Dave Hansen wrote:
> On 02/10/2014 01:44 PM, Naoya Horiguchi wrote:
> > When we try to use multiple callbacks at different levels, skip control is
> > also important. For example, we have thp enabled in the normal configuration,
> > and we are interested in doing some work for a thp. But sometimes we want to
> > split it and handle it as normal pages, and at another time the user would
> > handle both at pmd level and pte level.
> > What we need is that, when we've done pmd_entry(), we want to decide whether
> > to go down to pte-level handling based on pmd_entry()'s result. So this
> > patch introduces a skip control flag in mm_walk.
> > We can't use the return value for this purpose, because we have already
> > defined the meaning of the whole range of return values (>0 is to terminate
> > the page table walk in the caller's specific manner, =0 is to continue the
> > walk, and <0 is to abort the walk in the general manner.)
>
> This seems a bit complicated for a case which doesn't exist in practice
> in the kernel today. We don't even *have* a single ->pte_entry handler.

The following users get their own pte_entry() in the latter part of this
patchset:
 - queue_pages_range()
 - mem_cgroup_count_precharge()
 - show_numa_map()
 - pagemap_read()
 - clear_refs_write()
 - show_smap()
 - or1k_dma_alloc()
 - or1k_dma_free()
 - subpage_mark_vma_nohuge()

> Everybody just sets ->pmd_entry and does the splitting and handling of
> individual pte entries in there.

Walking over every pte entry under a pmd is a common task, so unless there
is a good reason not to, we should do it on the mm/pagewalk.c side, not in
each pmd_entry() callback. (Callbacks should focus on their own task.)
A rough sketch of how such a callback pair looks with the proposed skip
control is appended below, after the signature.

> The only reason it's needed is because
> of the later patches in the series, which is kinda goofy.

Most of the current users use pte_entry() in the latest linux-mm. Only a
few callers (mem_cgroup_move_charge() and force_swapin_readahead()) make
their pmd_entry() handle the pte-level walk in their own way.

BTW, we have some potential callers of the page table walker which
currently do the page walk completely in their own way. Here's the list:
 - mincore()
 - copy_page_range()
 - remap_pfn_range()
 - zap_page_range()
 - free_pgtables()
 - vmap_page_range_noflush()
 - change_protection_range()

Yes, my work on cleaning up the page table walker is still in progress.

> I'm biased, but I think the abstraction here is done in the wrong place.
>
> Naoya, could you take a look at the new handler I proposed? Would
> that help make this simpler?

I'll look through this series later, and I'd like to add some of your
patches on top of this patchset.

Thanks,
Naoya Horiguchi
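
For reference, here is a minimal sketch of what a caller's pmd_entry()/
pte_entry() pair could look like with the skip control proposed in this
patch. The callback names are made up, and the field name walk->skip, plus
the rule that the walker consults it before calling pte_entry() for the
range under that pmd, come from this patchset rather than mainline, so
treat them as assumptions; locking around the thp check is omitted for
brevity.

	static int my_pmd_entry(pmd_t *pmd, unsigned long addr,
				unsigned long end, struct mm_walk *walk)
	{
		if (pmd_trans_huge(*pmd)) {
			/* do the per-thp work here ... */
			walk->skip = 1;	/* don't descend to pte level */
		}
		/*
		 * The return value keeps its existing meaning: <0 aborts
		 * the walk, =0 continues, and >0 terminates it in the
		 * caller's specific manner.
		 */
		return 0;
	}

	static int my_pte_entry(pte_t *pte, unsigned long addr,
				unsigned long end, struct mm_walk *walk)
	{
		/* per-pte work for the non-thp (or already split) case */
		return 0;
	}

With this, the caller registers both handlers in its struct mm_walk and
lets mm/pagewalk.c decide, based on walk->skip, whether the pte-level loop
runs, instead of open-coding that loop inside pmd_entry().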