On Wed, Jul 17, 2013 at 01:00:53PM +0200, Peter Zijlstra wrote:
> On Mon, Jul 15, 2013 at 04:20:06PM +0100, Mel Gorman wrote:
> > The zero page is not replicated between nodes and is often shared
> > between processes. The data is read-only and likely to be cached in
> > local CPUs if heavily accessed, meaning that the remote memory access
> > cost is less of a concern. This patch stops accounting for NUMA hinting
> > faults on the zero page, both in terms of counting faults and in terms
> > of scheduling tasks on nodes.
> >
> > Signed-off-by: Mel Gorman <mgorman@xxxxxxx>
> > ---
> >  mm/huge_memory.c | 9 +++++++++
> >  mm/memory.c      | 7 ++++++-
> >  2 files changed, 15 insertions(+), 1 deletion(-)
> >
> > diff --git a/mm/huge_memory.c b/mm/huge_memory.c
> > index e4a79fa..ec938ed 100644
> > --- a/mm/huge_memory.c
> > +++ b/mm/huge_memory.c
> > @@ -1302,6 +1302,15 @@ int do_huge_pmd_numa_page(struct mm_struct *mm, struct vm_area_struct *vma,
> >
> >  	page = pmd_page(pmd);
> >  	get_page(page);
> > +
> > +	/*
> > +	 * Do not account for faults against the huge zero page. The read-only
> > +	 * data is likely to be read-cached on the local CPUs and it is less
> > +	 * useful to know about local versus remote hits on the zero page.
> > +	 */
> > +	if (is_huge_zero_pfn(page_to_pfn(page)))
> > +		goto clear_pmdnuma;
> > +
> >  	src_nid = numa_node_id();
> >  	count_vm_numa_event(NUMA_HINT_FAULTS);
> >  	if (src_nid == page_to_nid(page))
>
> And because of:
>
>   5918d10 thp: fix huge zero page logic for page with pfn == 0

Yes. Thanks.

--
Mel Gorman
SUSE Labs
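
For reference: commit 5918d10 removed the pfn-based helper used in the hunk
above and introduced is_huge_zero_page(), which detects the huge zero page by
comparing the struct page pointer directly, so that a system where pfn 0 backs
a real page is not misdetected. A minimal sketch of the same check rebased on
top of that commit; the clear_pmdnuma label and the surrounding lines are taken
from the quoted diff, not verified against the final tree:

	/*
	 * Sketch: the quoted hunk rebased on 5918d10. Context lines
	 * (pmd_page()/get_page() and the clear_pmdnuma label) come from
	 * the diff above.
	 */
	page = pmd_page(pmd);
	get_page(page);

	/*
	 * Do not account for faults against the huge zero page. The read-only
	 * data is likely to be read-cached on the local CPUs and it is less
	 * useful to know about local versus remote hits on the zero page.
	 */
	if (is_huge_zero_page(page))	/* post-5918d10: page-pointer compare */
		goto clear_pmdnuma;

Comparing page pointers avoids the false positive the old is_huge_zero_pfn()
check had before the zero page was allocated, when the stored pfn was still 0
and a fault on a page with pfn 0 would wrongly match.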