On Thu, 2 Jul 2020, Xunlei Pang wrote:

> This patch introduces two counters to maintain the actual number
> of partial objects dynamically instead of iterating the partial
> page lists with list_lock held.
>
> New counters of kmem_cache_node are: pfree_objects, ptotal_objects.
> The main operations are under list_lock in slow path, its performance
> impact is minimal.

If we do this at all, then these counters need to be under
CONFIG_SLUB_DEBUG.

> --- a/mm/slab.h
> +++ b/mm/slab.h
> @@ -616,6 +616,8 @@ struct kmem_cache_node {
>  #ifdef CONFIG_SLUB
>  	unsigned long nr_partial;
>  	struct list_head partial;
> +	atomic_long_t pfree_objects; /* partial free objects */
> +	atomic_long_t ptotal_objects; /* partial total objects */

Please put these inside the CONFIG_SLUB_DEBUG block. Without
CONFIG_SLUB_DEBUG we need to build with a minimal memory footprint.

>  #ifdef CONFIG_SLUB_DEBUG
>  	atomic_long_t nr_slabs;
>  	atomic_long_t total_objects;
>
> diff --git a/mm/slub.c b/mm/slub.c

Also this looks to be quite heavy on the cache and on execution time.
Note that the list_lock could be taken frequently in the
performance-sensitive case of freeing an object that is not in the
partial lists.