On Wed, 2008-08-13 at 23:14 +0900, KOSAKI Motohiro wrote:
> >> :t-0000128 28739 128 1.3G 20984/20984/8 512 0 99 0 *
> >
> > Argh. Most slabs contain a single object. Probably due to the
> > conflict resolution.
>
> agreed, the issue exists in the lock contention code.
>
> > The obvious fix is to avoid allocating another slab on conflict
> > but how will this impact performance?
> >
> > Index: linux-2.6/mm/slub.c
> > ===================================================================
> > --- linux-2.6.orig/mm/slub.c	2008-08-13 08:06:00.000000000 -0500
> > +++ linux-2.6/mm/slub.c	2008-08-13 08:07:59.000000000 -0500
> > @@ -1253,13 +1253,11 @@
> >  static inline int lock_and_freeze_slab(struct kmem_cache_node *n,
> >  						struct page *page)
> >  {
> > -	if (slab_trylock(page)) {
> > -		list_del(&page->lru);
> > -		n->nr_partial--;
> > -		__SetPageSlubFrozen(page);
> > -		return 1;
> > -	}
> > -	return 0;
> > +	slab_lock(page);
> > +	list_del(&page->lru);
> > +	n->nr_partial--;
> > +	__SetPageSlubFrozen(page);
> > +	return 1;
> >  }
>
> I haven't measured it yet, but I don't like this patch;
> it may hurt other typical benchmarks.
>
> So, I think a better way is:
>
>  1. slab_trylock(); if it succeeds, goto 10.
>  2. check the fragmentation ratio; if it is low, goto 10.
>  3. slab_lock()
> 10. return from the function
>
> I think this way doesn't cause a performance regression, because
> high fragmentation triggers defragmentation and compaction later
> on. So preventing fragmentation often increases performance.
>
> Thoughts?

I guess that would work. But how exactly would you quantify
"fragmentation ratio"?
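To make the question concrete, here is a minimal, untested sketch of
steps 1-3 against the same function. slab_fragmentation_high() is a
made-up placeholder, not an existing SLUB helper; deciding what it
should actually compute is exactly the open question:

static inline int lock_and_freeze_slab(struct kmem_cache_node *n,
						struct page *page)
{
	/* 1. Cheap path: uncontended case behaves as today. */
	if (!slab_trylock(page)) {
		/*
		 * 2. Contended. Only block on the lock when the node is
		 * already badly fragmented; otherwise return failure as
		 * before and let the caller fall back to allocating a
		 * new slab. slab_fragmentation_high() is hypothetical.
		 */
		if (!slab_fragmentation_high(n))
			return 0;
		/* 3. Fragmentation is high, so wait for the lock. */
		slab_lock(page);
	}
	/* 10. Freeze the slab exactly as the current code does. */
	list_del(&page->lru);
	n->nr_partial--;
	__SetPageSlubFrozen(page);
	return 1;
}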