The patch titled
     slab: fix kmalloc_node applying memory policies if nodeid == numa_node_id()
has been added to the -mm tree.  Its filename is
     slab-fix-kmalloc_node-applying-memory-policies-if-nodeid-==-numa_node_id.patch

See http://www.zip.com.au/~akpm/linux/patches/stuff/added-to-mm.txt
to find out what to do about this

------------------------------------------------------
Subject: slab: fix kmalloc_node applying memory policies if nodeid == numa_node_id()
From: Christoph Lameter <clameter@xxxxxxx>

kmalloc_node() falls back to ____cache_alloc() under certain conditions, and
at that point memory policies may be applied, redirecting the allocation away
from the current node.  Therefore kmalloc_node(..., numa_node_id()) or
kmalloc_node(..., -1) may not return memory from the local node.

Fix this by doing the policy check in __cache_alloc() instead of
____cache_alloc().

This version is a cleanup of Kiran's patch:

- Tested on ia64.
- Extra material removed.
- Consolidate the exit path if alternate_node_alloc() returned an object.

Signed-off-by: Alok N Kataria <alok.kataria@xxxxxxxxxxxxxx>
Signed-off-by: Ravikiran Thirumalai <kiran@xxxxxxxxxxxx>
Signed-off-by: Shai Fultheim <shai@xxxxxxxxxxxx>
Signed-off-by: Christoph Lameter <clameter@xxxxxxx>
Signed-off-by: Andrew Morton <akpm@xxxxxxxx>
---

 mm/slab.c |   18 ++++++++++--------
 1 file changed, 10 insertions(+), 8 deletions(-)

diff -puN mm/slab.c~slab-fix-kmalloc_node-applying-memory-policies-if-nodeid-==-numa_node_id mm/slab.c
--- a/mm/slab.c~slab-fix-kmalloc_node-applying-memory-policies-if-nodeid-==-numa_node_id
+++ a/mm/slab.c
@@ -3030,14 +3030,6 @@ static inline void *____cache_alloc(stru
         void *objp;
         struct array_cache *ac;
 
-#ifdef CONFIG_NUMA
-        if (unlikely(current->flags & (PF_SPREAD_SLAB | PF_MEMPOLICY))) {
-                objp = alternate_node_alloc(cachep, flags);
-                if (objp != NULL)
-                        return objp;
-        }
-#endif
-
         check_irq_off();
         ac = cpu_cache_get(cachep);
         if (likely(ac->avail)) {
@@ -3060,7 +3052,17 @@ static __always_inline void *__cache_all
 
         cache_alloc_debugcheck_before(cachep, flags);
         local_irq_save(save_flags);
+
+#ifdef CONFIG_NUMA
+        if (unlikely(current->flags & (PF_SPREAD_SLAB | PF_MEMPOLICY))) {
+                objp = alternate_node_alloc(cachep, flags);
+                if (objp != NULL)
+                        goto out;
+        }
+#endif
+
         objp = ____cache_alloc(cachep, flags);
+out:
         local_irq_restore(save_flags);
         objp = cache_alloc_debugcheck_after(cachep, flags, objp,
                                             caller);
_
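As an aside, here is a minimal caller-side sketch of the expectation this
patch restores.  alloc_local_buffer() is a hypothetical helper, assumed to
run in ordinary process context on a kernel with this patch applied;
kmalloc_node(), numa_node_id(), virt_to_page() and page_to_nid() are the
existing kernel interfaces, and the WARN_ON() merely flags the off-node
result that a mempolicy or PF_SPREAD_SLAB could previously cause.

#include <linux/kernel.h>
#include <linux/mm.h>
#include <linux/slab.h>
#include <linux/topology.h>

/*
 * Hypothetical caller: explicitly ask the slab allocator for memory on
 * the current node.  With this patch, a task mempolicy or PF_SPREAD_SLAB
 * no longer redirects the allocation to another node.
 */
static void *alloc_local_buffer(size_t size)
{
        /* Assumes the task is not migrated between these two calls. */
        int node = numa_node_id();
        void *buf = kmalloc_node(size, GFP_KERNEL, node);

        /*
         * kmalloc'ed memory is directly mapped, so the backing page
         * tells us which node actually satisfied the allocation.
         */
        if (buf)
                WARN_ON(page_to_nid(virt_to_page(buf)) != node);
        return buf;
}

The same expectation holds for kmalloc_node(size, flags, -1), which the
changelog notes is equally affected.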
Patches currently in -mm which might be from clameter@xxxxxxx are

fix-longstanding-load-balancing-bug-in-the-scheduler.patch
cleanup-radix_tree_derefreplace_slot-calling-conventions-warning-fixes.patch
reduce-max_nr_zones-remove-two-strange-uses-of-max_nr_zones.patch
reduce-max_nr_zones-fix-max_nr_zones-array-initializations.patch
reduce-max_nr_zones-make-display-of-highmem-counters-conditional-on-config_highmem.patch
reduce-max_nr_zones-make-display-of-highmem-counters-conditional-on-config_highmem-tidy.patch
reduce-max_nr_zones-move-highmem-counters-into-highmemc-h.patch
reduce-max_nr_zones-move-highmem-counters-into-highmemc-h-fix.patch
reduce-max_nr_zones-page-allocator-zone_highmem-cleanup.patch
reduce-max_nr_zones-use-enum-to-define-zones-reformat-and-comment.patch
reduce-max_nr_zones-use-enum-to-define-zones-reformat-and-comment-cleanup.patch
reduce-max_nr_zones-make-zone_dma32-optional.patch
reduce-max_nr_zones-make-zone_highmem-optional.patch
reduce-max_nr_zones-make-zone_highmem-optional-fix.patch
reduce-max_nr_zones-make-zone_highmem-optional-fix-fix.patch
reduce-max_nr_zones-remove-display-of-counters-for-unconfigured-zones.patch
reduce-max_nr_zones-fix-i386-srat-check-for-max_nr_zones.patch
mempolicies-fix-policy_zone-check.patch
apply-type-enum-zone_type.patch
apply-type-enum-zone_type-fix.patch
linearly-index-zone-node_zonelists.patch
slab-respect-architecture-and-caller-mandated-alignment.patch
slab-optimize-kmalloc_node-the-same-way-as-kmalloc.patch
slab-optimize-kmalloc_node-the-same-way-as-kmalloc-fix.patch
slab-extract-__kmem_cache_destroy-from-kmem_cache_destroy.patch
slab-do-not-panic-when-alloc_kmemlist-fails-and-slab-is-up.patch
add-__gfp_thisnode-to-avoid-fallback-to-other-nodes-and-ignore.patch
add-__gfp_thisnode-to-avoid-fallback-to-other-nodes-and-ignore-fix.patch
sys_move_pages-do-not-fall-back-to-other-nodes.patch
guarantee-that-the-uncached-allocator-gets-pages-on-the-correct.patch
cleanup-add-zone-pointer-to-get_page_from_freelist.patch
profiling-require-buffer-allocation-on-the-correct-node.patch
define-easier-to-handle-gfp_thisnode.patch
optimize-free_one_page.patch
do-not-check-unpopulated-zones-for-draining-and-counter.patch
extract-the-allocpercpu-functions-from-the-slab-allocator.patch
replace-min_unmapped_ratio-by-min_unmapped_pages-in-struct-zone.patch
zvc-support-nr_slab_reclaimable--nr_slab_unreclaimable.patch
zone_reclaim-dynamic-slab-reclaim.patch
zone_reclaim-dynamic-slab-reclaim-tidy.patch
zone-reclaim-with-slab-avoid-unecessary-off-node-allocations.patch
hugepages-use-page_to_nid-rather-than-traversing-zone-pointers.patch
numa-add-zone_to_nid-function.patch
numa-add-zone_to_nid-function-update.patch
slab-fix-kmalloc_node-applying-memory-policies-if-nodeid-==-numa_node_id.patch
x86-implement-always-locked-bit-ops-for-memory-shared-with-an-smp-hypervisor.patch
zvc-support-nr_slab_reclaimable--nr_slab_unreclaimable-swap_prefetch.patch
reduce-max_nr_zones-swap_prefetch-remove-incorrect-use-of-zone_highmem.patch
numa-add-zone_to_nid-function-swap_prefetch.patch
readahead-state-based-method-aging-accounting-apply-type-enum-zone_type-readahead.patch

-
To unsubscribe from this list: send the line "unsubscribe mm-commits" in
the body of a message to majordomo@xxxxxxxxxxxxxxx
More majordomo info at  http://vger.kernel.org/majordomo-info.html