On Thu, 2011-09-15 at 13:40 +0800, Pekka Enberg wrote:
> On Thu, Sep 8, 2011 at 5:24 AM, Alex,Shi <alex.shi@xxxxxxxxx> wrote:
> >> > BTW, some testing results for your PCP SLUB:
> >> >
> >> > for hackbench process testing:
> >> > on WSM-EP, inc ~60%, NHM-EP inc ~25%
> >> > on NHM-EX, inc ~200%, core2-EP, inc ~250%.
> >> > on Tigerton-EX, inc 1900%, :)
> >> >
> >> > for hackbench thread testing:
> >> > on WSM-EP, no clear inc, NHM-EP no clear inc
> >> > on NHM-EX, inc 10%, core2-EP, inc ~20%.
> >> > on Tigerton-EX, inc 100%,
> >> >
> >> > for netperf loopback testing, no clear performance change.
> >> did you add my patch to add page to partial list tail in the test?
> >> Without it the per-cpu partial list can have more significant impact to
> >> reduce lock contention, so the result isn't precise.
> >>
> >
> > No, the penberg tree did include your patch on the slub/partial head.
> > Actually PCP won't take that path, so there is no need for your patch.
> > I drafted a patch to remove some unused code in __slab_free related to
> > this, and will send it out later.
>
> Which patch is that? Please send it to penberg@xxxxxxxxxxxxxx as the
> @kernel.org email forward isn't working.

Oops, this thread mentioned two patches:

1. Shaohua's bug-fixing patch, which is already in your tree as the
   'slub/urgent' head, if my memory serves me right.

2. [PATCH] slub: Discard slab page only when node partials > minimum
   setting, which is the following.

----------
From: Alex Shi <alex.shi@xxxxxxxxx>
Date: Tue, 6 Sep 2011 14:46:01 +0800
Subject: [PATCH] slub: Discard slab page when node partial > minimum partial number

A slab page should only be discarded when the node's partial count
exceeds min_partial; otherwise, empty slabs kept on the node partial
lists may eat up all memory.

Signed-off-by: Alex Shi <alex.shi@xxxxxxxxx>
Acked-by: Christoph Lameter <cl@xxxxxxxxx>
---
 mm/slub.c |    2 +-
 1 files changed, 1 insertions(+), 1 deletions(-)

diff --git a/mm/slub.c b/mm/slub.c
index 1348c09..492beab 100644
--- a/mm/slub.c
+++ b/mm/slub.c
@@ -1953,7 +1953,7 @@ static void unfreeze_partials(struct kmem_cache *s)

 			new.frozen = 0;

-			if (!new.inuse && (!n || n->nr_partial < s->min_partial))
+			if (!new.inuse && (!n || n->nr_partial > s->min_partial))
 				m = M_FREE;
 			else {
 				struct kmem_cache_node *n2 = get_node(s,
--
1.7.0
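
For anyone skimming the diff, here is a minimal standalone sketch (plain
userspace C, not kernel code; the names fake_cache, fake_node and
should_discard are made up purely for illustration) of the decision the
corrected condition expresses: an empty slab is handed back to the page
allocator only once its node already caches more than min_partial partial
slabs; below that threshold it stays cached on the node partial list.

/*
 * Minimal standalone sketch (userspace C, not kernel code) of the
 * decision the one-character fix above restores.  The struct and
 * function names here are invented for illustration only.
 */
#include <stdbool.h>
#include <stdio.h>

struct fake_node {
	unsigned long nr_partial;	/* slabs currently on the node partial list */
};

struct fake_cache {
	unsigned long min_partial;	/* partial slabs to keep cached per node */
};

/*
 * Mirrors the fixed test: only an empty slab (inuse == 0) on a node that
 * already holds more than min_partial partial slabs should be freed;
 * below that threshold it is kept cached to avoid page allocator churn.
 */
static bool should_discard(const struct fake_cache *s,
			   const struct fake_node *n,
			   unsigned int inuse)
{
	return inuse == 0 && (!n || n->nr_partial > s->min_partial);
}

int main(void)
{
	struct fake_cache s = { .min_partial = 5 };
	struct fake_node n = { .nr_partial = 3 };

	/* Empty slab, but the node is below min_partial: keep it cached (0). */
	printf("%d\n", should_discard(&s, &n, 0));

	/* Empty slab and the node has plenty of partials: discard it (1). */
	n.nr_partial = 8;
	printf("%d\n", should_discard(&s, &n, 0));

	return 0;
}

The second printf flips to 1 only once nr_partial exceeds min_partial,
which is the behaviour the commit message describes; with the old '<'
comparison the check would point the wrong way and empty slabs could pile
up on the node partial lists.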