On 27.01.2014 [14:58:05 +0900], Joonsoo Kim wrote:
> On Fri, Jan 24, 2014 at 05:10:42PM -0800, Nishanth Aravamudan wrote:
> > On 24.01.2014 [16:25:58 -0800], David Rientjes wrote:
> > > On Fri, 24 Jan 2014, Nishanth Aravamudan wrote:
> > > 
> > > > Thank you for clarifying and providing a test patch. I ran with this on
> > > > the system showing the original problem, configured to have 15GB of
> > > > memory.
> > > > 
> > > > With your patch after boot:
> > > > 
> > > > MemTotal:       15604736 kB
> > > > MemFree:         8768192 kB
> > > > Slab:            3882560 kB
> > > > SReclaimable:     105408 kB
> > > > SUnreclaim:      3777152 kB
> > > > 
> > > > With Anton's patch after boot:
> > > > 
> > > > MemTotal:       15604736 kB
> > > > MemFree:        11195008 kB
> > > > Slab:            1427968 kB
> > > > SReclaimable:     109184 kB
> > > > SUnreclaim:      1318784 kB
> > > > 
> > > > 
> > > > I know that's fairly unscientific, but the numbers are reproducible.
> > 
> Hello,
> 
> I think that there is one mistake on David's patch although I'm not sure
> that it is the reason for this result.
> 
> With David's patch, get_partial() in new_slab_objects() doesn't work
> properly, because we only change node id in !node_match() case. If we
> meet just !freelist case, we pass node id directly to
> new_slab_objects(), so we always try to allocate new slab page
> regardless existence of partial pages. We should solve it.
> 
> Could you try this one?

This helps about the same as David's patch -- but I found the reason
why! ppc64 doesn't set CONFIG_HAVE_MEMORYLESS_NODES :) Expect a patch
shortly for that and one other case I found. This patch on its own
seems to help on our test system by saving around 1.5GB of slab.

Tested-by: Nishanth Aravamudan <nacc@xxxxxxxxxxxxxxxxxx>
Acked-by: Nishanth Aravamudan <nacc@xxxxxxxxxxxxxxxxxx>

with the caveat below.

Thanks,
Nish

> 
> Thanks.
> 
> --- a/mm/slub.c
> +++ b/mm/slub.c
> @@ -1698,8 +1698,10 @@ static void *get_partial(struct kmem_cache *s, gfp_t flags, int node,
>  		struct kmem_cache_cpu *c)
>  {
>  	void *object;
> -	int searchnode = (node == NUMA_NO_NODE) ? numa_node_id() : node;
> +	int searchnode = (node == NUMA_NO_NODE) ? numa_mem_id() : node;
>  
> +	if (node != NUMA_NO_NODE && !node_present_pages(node))
> +		searchnode = numa_mem_id();

This might be clearer as:

	int searchnode = node;
	if (node == NUMA_NO_NODE || !node_present_pages(node))
		searchnode = numa_mem_id();

>  	object = get_partial_node(s, get_node(s, searchnode), c, flags);
>  	if (object || node != NUMA_NO_NODE)
>  		return object;
> @@ -2278,10 +2280,14 @@ redo:
>  
>  	if (unlikely(!node_match(page, node))) {
>  		stat(s, ALLOC_NODE_MISMATCH);
> -		deactivate_slab(s, page, c->freelist);
> -		c->page = NULL;
> -		c->freelist = NULL;
> -		goto new_slab;
> +		if (unlikely(!node_present_pages(node)))
> +			node = numa_mem_id();
> +		if (!node_match(page, node)) {
> +			deactivate_slab(s, page, c->freelist);
> +			c->page = NULL;
> +			c->freelist = NULL;
> +			goto new_slab;
> +		}
>  	}
>  
>  	/*
> 
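
A note on the CONFIG_HAVE_MEMORYLESS_NODES point above, for anyone
following along: numa_mem_id() only differs from numa_node_id() when
that option is set. Roughly, paraphrasing the definitions in
include/linux/topology.h (not an exact quote of the header):

	#ifdef CONFIG_HAVE_MEMORYLESS_NODES
	/* nearest node that actually has memory, tracked per cpu */
	static inline int numa_mem_id(void)
	{
		return __this_cpu_read(_numa_mem_);
	}
	#else
	/* no memoryless-node support: just the cpu's own node */
	static inline int numa_mem_id(void)
	{
		return numa_node_id();
	}
	#endif

If I'm reading it right, without the option the numa_mem_id()
fallbacks in the hunks below can still resolve to the memoryless node
itself on ppc64, which is why enabling the option there should help
further on top of this patch.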
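
And to pull the caveat above together, the top of get_partial() would
then read roughly as follows. This is only a sketch against the hunk
quoted above plus the suggested simplification, not a tested
replacement patch:

	static void *get_partial(struct kmem_cache *s, gfp_t flags, int node,
			struct kmem_cache_cpu *c)
	{
		void *object;
		int searchnode = node;

		/*
		 * No node was asked for, or the requested node has no
		 * memory: search the nearest node that does have memory.
		 */
		if (node == NUMA_NO_NODE || !node_present_pages(node))
			searchnode = numa_mem_id();

		object = get_partial_node(s, get_node(s, searchnode), c, flags);
		if (object || node != NUMA_NO_NODE)
			return object;

		return get_any_partial(s, flags, c);
	}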