An architecture may fall back to a fake node when CONFIG_NUMA is enabled but no
node configuration is provided by ACPI or the device tree. In that case, looking
up the memory policy in the allocation path is meaningless. Moreover, a
performance regression was observed in the minor page fault test provided at
https://lkml.org/lkml/2006/8/29/294: the average faults/sec with NUMA enabled
on a fake node was 5-6% worse than with NUMA disabled.

To reduce this regression, a fastpath is introduced that skips the memory
policy check when NUMA is enabled but only a fake node is in use. On
architectures that do not support a fake node, the fastpath has no effect on
the allocation path.

Signed-off-by: Janghyuck Kim <janghyuck.kim@xxxxxxxxxxx>
---
 mm/internal.h  | 4 ++++
 mm/mempolicy.c | 3 +++
 2 files changed, 7 insertions(+)

diff --git a/mm/internal.h b/mm/internal.h
index 31ff935b2547..3b6c21814fbc 100644
--- a/mm/internal.h
+++ b/mm/internal.h
@@ -36,6 +36,10 @@ void page_writeback_init(void);
 
 vm_fault_t do_swap_page(struct vm_fault *vmf);
 
+#ifndef numa_off_fastpath
+#define numa_off_fastpath()	false
+#endif
+
 void free_pgtables(struct mmu_gather *tlb, struct vm_area_struct *start_vma,
 		unsigned long floor, unsigned long ceiling);
 
diff --git a/mm/mempolicy.c b/mm/mempolicy.c
index e32360e90274..21156671d941 100644
--- a/mm/mempolicy.c
+++ b/mm/mempolicy.c
@@ -2152,6 +2152,9 @@ struct page *alloc_pages_vma(gfp_t gfp, int order, struct vm_area_struct *vma,
 	int preferred_nid;
 	nodemask_t *nmask;
 
+	if (numa_off_fastpath())
+		return __alloc_pages_nodemask(gfp, order, 0, NULL);
+
 	pol = get_vma_policy(vma, addr);
 
 	if (pol->mode == MPOL_INTERLEAVE) {
-- 
2.28.0