On Thu, 2011-09-08 at 08:43 +0800, Shi, Alex wrote:
> On Wed, 2011-09-07 at 23:05 +0800, Christoph Lameter wrote:
> > On Wed, 7 Sep 2011, Shi, Alex wrote:
> >
> > > Oh, it seems deactivate_slab() was already corrected in Linus' tree, but
> > > unfreeze_partials() was just copied from the old version of
> > > deactivate_slab().
> >
> > Ok, then the patch is ok.
> >
> > Do you also have performance measurements? I am a bit hesitant to merge
> > the per-cpu partials patchset if there are regressions in the low
> > concurrency tests, as seems to be indicated by Intel's latest tests.
> >
>
> My LKP testing system mostly focuses on server platforms. I tested your
> per-cpu partial set on the hackbench and netperf loopback benchmarks.
> hackbench improves a lot.
>
> Maybe some IO testing would be low concurrency for SLUB, perhaps a kbuild
> with a few jobs, or low swap pressure testing? I may try them on your
> patchset in the coming days.
>
> BTW, some testing results for your PCP SLUB:
>
> for hackbench process testing:
> on WSM-EP, inc ~60%, NHM-EP inc ~25%
> on NHM-EX, inc ~200%, core2-EP, inc ~250%.
> on Tigerton-EX, inc ~1900%, :)
>
> for hackbench thread testing:
> on WSM-EP, no clear inc, NHM-EP no clear inc
> on NHM-EX, inc ~10%, core2-EP, inc ~20%.
> on Tigerton-EX, inc ~100%.
>
> for netperf loopback testing, no clear performance change.

Did you add my patch that adds the page to the tail of the partial list in the test? Without it, the per-cpu partial list shows a more significant impact on reducing lock contention, so the result isn't precise.
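
For reference, here is a minimal userspace sketch of the head-vs-tail insertion idea the patch is about. This is not the actual mm/slub.c code; the names slab_page, partial_list, add_partial and insert_between are illustrative only, and the structure is simplified to a plain doubly-linked list.

/*
 * Sketch: a partial list where a page can be added either at the head
 * (picked up again first) or at the tail (picked up last).
 */
#include <stdio.h>
#include <stdbool.h>

struct slab_page {
	struct slab_page *prev, *next;
	int inuse;			/* objects allocated from this page */
};

struct partial_list {
	struct slab_page head;		/* sentinel node */
	unsigned long nr_partial;
};

static void partial_list_init(struct partial_list *n)
{
	n->head.prev = n->head.next = &n->head;
	n->nr_partial = 0;
}

/* Link a new node between two known neighbours. */
static void insert_between(struct slab_page *page,
			   struct slab_page *prev, struct slab_page *next)
{
	next->prev = page;
	page->next = next;
	page->prev = prev;
	prev->next = page;
}

/*
 * Return a page to the partial list.  Head insertion puts it at the
 * front, where the allocator finds it first; tail insertion puts it at
 * the back, behind the pages that are already queued.
 */
static void add_partial(struct partial_list *n, struct slab_page *page,
			bool to_tail)
{
	n->nr_partial++;
	if (to_tail)
		insert_between(page, n->head.prev, &n->head);	/* tail */
	else
		insert_between(page, &n->head, n->head.next);	/* head */
}

int main(void)
{
	struct partial_list n;
	struct slab_page a = { .inuse = 3 }, b = { .inuse = 1 };

	partial_list_init(&n);
	add_partial(&n, &a, false);	/* head insertion */
	add_partial(&n, &b, true);	/* tail insertion */

	for (struct slab_page *p = n.head.next; p != &n.head; p = p->next)
		printf("page with %d objects in use\n", p->inuse);
	return 0;
}

Built with any C99 compiler and run, it prints the pages in list order, which makes the head/tail difference visible.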