On Tue, 2 Aug 2011, David Rientjes wrote:

> On Tue, 2 Aug 2011, Christoph Lameter wrote:
>
> > The per-cpu partial lists only add the need for more memory if other
> > processors have to allocate new pages because they do not have enough
> > partial slab pages to satisfy their needs. That can be tuned by a cap
> > on objects.
>
> The netperf benchmark isn't representative of a heavy slab-consuming
> workload; I routinely run jobs on these machines that use 20 times the
> amount of slab. From what I saw in the earlier posting of the per-cpu
> partial list patch, the min_partial value is set to half of what it was
> previously as a per-node partial list. Since these are 16-core, 4-node
> systems, that would mean that after a kmem_cache_shrink() on a cache
> that leaves empty slabs on the partial lists, we've doubled the memory
> for slub's partial lists system-wide.

Cutting down the potential number of empty slabs that we might possibly
keep around because we have no partial slabs per node increases memory
usage?
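For reference, a minimal back-of-the-envelope sketch of the arithmetic behind
the "doubled" claim above. The MIN_PARTIAL value and the assumption that the
per-cpu floor is exactly half of the old per-node floor are illustrative
placeholders, not figures taken from the patch:

/*
 * Worst-case empty-slab retention after kmem_cache_shrink(), comparing a
 * per-node partial-list floor against a per-cpu floor at half that value.
 * MIN_PARTIAL here is a hypothetical per-node floor, not the kernel's.
 */
#include <stdio.h>

#define MIN_PARTIAL	10	/* hypothetical per-node floor */

int main(void)
{
	int nodes = 4, cpus = 16;	/* the 16-core, 4-node machines cited */

	int per_node_floor = nodes * MIN_PARTIAL;
	int per_cpu_floor  = cpus * (MIN_PARTIAL / 2);

	printf("per-node scheme: up to %d empty slabs system-wide\n",
	       per_node_floor);
	printf("per-cpu scheme:  up to %d empty slabs system-wide\n",
	       per_cpu_floor);
	return 0;
}

Under those assumptions the per-cpu floor works out to 16 * 5 = 80 slabs
versus 4 * 10 = 40 for the per-node case, i.e. the doubling David describes;
the disagreement above is over whether that worst case is actually reached.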