The patch titled
     Memoryless nodes: SLUB support
has been removed from the -mm tree.  Its filename was
     memoryless-nodes-slub-support.patch

This patch was dropped because it was merged into mainline or a subsystem tree

------------------------------------------------------
Subject: Memoryless nodes: SLUB support
From: Christoph Lameter <clameter@xxxxxxx>

Simply switch all for_each_online_node() to
for_each_node_state(node, N_NORMAL_MEMORY).  That way SLUB only operates
on nodes with regular memory.  Any allocation attempt on a memoryless
node or a node with just highmem will fail, whereupon SLUB will fetch
memory from a nearby node (depending on how memory policies and cpusets
describe the fallback).

Signed-off-by: Christoph Lameter <clameter@xxxxxxx>
Tested-by: Lee Schermerhorn <lee.schermerhorn@xxxxxx>
Acked-by: Bob Picco <bob.picco@xxxxxx>
Cc: Nishanth Aravamudan <nacc@xxxxxxxxxx>
Cc: KAMEZAWA Hiroyuki <kamezawa.hiroyu@xxxxxxxxxxxxxx>
Cc: Mel Gorman <mel@xxxxxxxxx>
Signed-off-by: Andrew Morton <akpm@xxxxxxxxxxxxxxxxxxxx>
---

 mm/slub.c |   16 ++++++++--------
 1 file changed, 8 insertions(+), 8 deletions(-)

diff -puN mm/slub.c~memoryless-nodes-slub-support mm/slub.c
--- a/mm/slub.c~memoryless-nodes-slub-support
+++ a/mm/slub.c
@@ -1921,7 +1921,7 @@ static void free_kmem_cache_nodes(struct
 {
 	int node;
 
-	for_each_online_node(node) {
+	for_each_node_state(node, N_NORMAL_MEMORY) {
 		struct kmem_cache_node *n = s->node[node];
 		if (n && n != &s->local_node)
 			kmem_cache_free(kmalloc_caches, n);
@@ -1939,7 +1939,7 @@ static int init_kmem_cache_nodes(struct
 	else
 		local_node = 0;
 
-	for_each_online_node(node) {
+	for_each_node_state(node, N_NORMAL_MEMORY) {
 		struct kmem_cache_node *n;
 
 		if (local_node == node)
@@ -2192,7 +2192,7 @@ static inline int kmem_cache_close(struc
 	flush_all(s);
 
 	/* Attempt to free all objects */
-	for_each_online_node(node) {
+	for_each_node_state(node, N_NORMAL_MEMORY) {
 		struct kmem_cache_node *n = get_node(s, node);
 
 		n->nr_partial -= free_list(s, n, &n->partial);
@@ -2521,7 +2521,7 @@ int kmem_cache_shrink(struct kmem_cache
 		return -ENOMEM;
 
 	flush_all(s);
-	for_each_online_node(node) {
+	for_each_node_state(node, N_NORMAL_MEMORY) {
 		n = get_node(s, node);
 
 		if (!n->nr_partial)
@@ -2916,7 +2916,7 @@ static long validate_slab_cache(struct k
 		return -ENOMEM;
 
 	flush_all(s);
-	for_each_online_node(node) {
+	for_each_node_state(node, N_NORMAL_MEMORY) {
 		struct kmem_cache_node *n = get_node(s, node);
 
 		count += validate_slab_node(s, n, map);
@@ -3136,7 +3136,7 @@ static int list_locations(struct kmem_ca
 	/* Push back cpu slabs */
 	flush_all(s);
 
-	for_each_online_node(node) {
+	for_each_node_state(node, N_NORMAL_MEMORY) {
 		struct kmem_cache_node *n = get_node(s, node);
 		unsigned long flags;
 		struct page *page;
@@ -3263,7 +3263,7 @@ static unsigned long slab_objects(struct
 		}
 	}
 
-	for_each_online_node(node) {
+	for_each_node_state(node, N_NORMAL_MEMORY) {
 		struct kmem_cache_node *n = get_node(s, node);
 
 		if (flags & SO_PARTIAL) {
@@ -3291,7 +3291,7 @@ static unsigned long slab_objects(struct
 
 	x = sprintf(buf, "%lu", total);
 #ifdef CONFIG_NUMA
-	for_each_online_node(node)
+	for_each_node_state(node, N_NORMAL_MEMORY)
 		if (nodes[node])
 			x += sprintf(buf + x, " N%d=%lu",
 					node, nodes[node]);
_
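
Not part of the patch: a minimal sketch of what the iterator change means,
assuming a NUMA build.  count_slub_visitable_nodes() is a hypothetical
helper; for_each_online_node(), for_each_node_state() and N_NORMAL_MEMORY
are the nodemask interfaces this series relies on.

#include <linux/kernel.h>
#include <linux/nodemask.h>

/*
 * Hypothetical helper, not part of the patch: compare how many nodes the
 * old and new iterators visit.  On a machine with memoryless (or
 * highmem-only) nodes the second count is smaller, because
 * N_NORMAL_MEMORY only contains nodes with regular memory -- exactly the
 * nodes SLUB can allocate slabs from.
 */
static void count_slub_visitable_nodes(void)
{
	int node, online = 0, with_normal_mem = 0;

	for_each_online_node(node)			/* old iterator */
		online++;

	for_each_node_state(node, N_NORMAL_MEMORY)	/* new iterator */
		with_normal_mem++;

	printk(KERN_INFO "online nodes: %d, nodes with normal memory: %d\n",
	       online, with_normal_mem);
}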
Patches currently in -mm which might be from clameter@xxxxxxx are

origin.patch
pa-risc-use-page-allocator-instead-of-slab-allocator.patch
dma-use-dev_to_node-to-get-node-for-device-in-dma_alloc_pages.patch
x86-fix-cpu_to_node-references.patch
x86-convert-x86_cpu_to_apicid-to-be-a-per-cpu-variable.patch
x86-convert-cpu_llc_id-to-be-a-per-cpu-variable.patch
x86-acpi-use-cpu_physical_id.patch
x86-convert-cpuinfo_x86-array-to-a-per_cpu-array.patch
slub-simplify-irq-off-handling.patch
slab-api-remove-useless-ctor-parameter-and-reorder-parameters.patch
slab-api-remove-useless-ctor-parameter-and-reorder-parameters-fix.patch
slab-api-remove-useless-ctor-parameter-and-reorder-parameters-fix-2.patch
slab-api-remove-useless-ctor-parameter-and-reorder-parameters-vs-unionfs.patch
oom-move-prototypes-to-appropriate-header-file.patch
oom-move-constraints-to-enum.patch
oom-change-all_unreclaimable-zone-member-to-flags.patch
oom-change-all_unreclaimable-zone-member-to-flags-fix.patch
oom-add-per-zone-locking.patch
oom-serialize-out-of-memory-calls.patch
oom-add-oom_kill_allocating_task-sysctl.patch
oom-suppress-extraneous-stack-and-memory-dump.patch
oom-compare-cpuset-mems_allowed-instead-of-exclusive.patch
oom-do-not-take-callback_mutex.patch
oom-do-not-take-callback_mutex-fix.patch
oom-prevent-including-schedh-in-header-file.patch
oom-add-header-file-to-kbuild-as-unifdef.patch
oom-convert-zone_scan_lock-from-mutex-to-spinlock.patch
mm-test-and-set-zone-reclaim-lock-before-starting.patch
mm-test-and-set-zone-reclaim-lock-before-starting-cleanup.patch
avoid-negative-and-full-width-shifts-in-radix-treec.patch
cpu-hotplug-slab-cleanup-cpuup_callback.patch
cpu-hotplug-slab-fix-memory-leak-in-cpu-hotplug-error-path.patch
intel-iommu-dmar-detection-and-parsing-logic.patch
intel-iommu-pci-generic-helper-function.patch
intel-iommu-clflush_cache_range-now-takes-size-param.patch
intel-iommu-iova-allocation-and-management-routines.patch
intel-iommu-intel-iommu-driver.patch
intel-iommu-avoid-memory-allocation-failures-in-dma-map-api-calls.patch
intel-iommu-intel-iommu-cmdline-option-forcedac.patch
intel-iommu-dmar-fault-handling-support.patch
intel-iommu-iommu-gfx-workaround.patch
intel-iommu-iommu-floppy-workaround.patch
revoke-core-code.patch
slab-api-remove-useless-ctor-parameter-and-reorder-parameters-vs-revoke.patch
documentation-vm-slabinfoc-clean-up-this-code.patch
cpuset-zero-malloc-revert-the-old-cpuset-fix.patch
memcontrol-move-oom-task-exclusion-to-tasklist.patch
memcontrol-move-oom-task-exclusion-to-tasklist-fix.patch
oom-add-sysctl-to-enable-task-memory-dump.patch
hotplug-cpu-migrate-a-task-within-its-cpuset.patch
hotplug-cpu-migrate-a-task-within-its-cpuset-fix.patch
hotplug-cpu-migrate-a-task-within-its-cpuset-doc.patch
bit_spin_lock-use-lock-bitops.patch
ext3-support-large-blocksize-up-to-pagesize.patch
slab-api-remove-useless-ctor-parameter-and-reorder-parameters-vs-reiser4.patch
page-owner-tracking-leak-detector.patch

-
To unsubscribe from this list: send the line "unsubscribe mm-commits" in
the body of a message to majordomo@xxxxxxxxxxxxxxx
More majordomo info at  http://vger.kernel.org/majordomo-info.html