Hi Christopher,

On Tue, Aug 11, 2020 at 3:52 PM Christopher Lameter <cl@xxxxxxxxx> wrote:
> On Fri, 7 Aug 2020, Pekka Enberg wrote:
>
> > Why do you consider this to be a fast path? This is all partial list
> > accounting when we allocate/deallocate a slab, no? Just like
> > ___slab_alloc() says, I assumed this to be the slow path... What am I
> > missing?
>
> I thought these were per object counters? If you just want to count the
> number of slabs then you do not need the lock at all. We already have a
> counter for the number of slabs.

The patch attempts to speed up count_partial(), which holds on to the
"n->list_lock" (with IRQs off) for the whole duration it takes to walk
the partial slab list:

	spin_lock_irqsave(&n->list_lock, flags);
	list_for_each_entry(page, &n->partial, slab_list)
		x += get_count(page);
	spin_unlock_irqrestore(&n->list_lock, flags);

It's counting the number of *objects*, but the counters are only updated
in bulk when we add/remove a slab to/from the partial list. The counter
updates are therefore *not* in the fast path, AFAICT.

Xunlei, please correct me if I'm reading your patches wrong.

On Tue, Aug 11, 2020 at 3:52 PM Christopher Lameter <cl@xxxxxxxxx> wrote:
> > No objections to alternative fixes, of course, but wrapping the
> > counters under CONFIG_DEBUG seems like just hiding the actual issue...
>
> CONFIG_DEBUG is on by default. It just compiles in the debug code and
> disables it so we can enable it with a kernel boot option. This is
> because we have had numerous issues in the past with "production"
> kernels that could not be recompiled with debug options. So just
> running the prod kernel with another option will allow you to find
> hard-to-debug issues in a full-scale production deployment with
> potentially proprietary modules etc.

Yeah, it's been too long since I last looked at the code, and I did not
realize that even count_partial() is wrapped in CONFIG_DEBUG. So by all
means, let's wrap the counters with that too.

- Pekka
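[Editor's note: a minimal userspace sketch of the bookkeeping idea discussed
above — maintaining a cached object count that is bulk-updated only when a
slab enters or leaves the partial list, so the read side no longer has to
walk the list under n->list_lock. All names (toy_*) are invented for
illustration; this is not the actual SLUB code or Xunlei's patch.]

```c
#include <assert.h>
#include <stddef.h>

/* Toy model of a node's partial list. The real kernel structures
 * (struct kmem_cache_node, struct page) are far richer; only the
 * counting logic discussed in the thread is modeled here. */
struct toy_slab {
	struct toy_slab *next;
	unsigned int inuse;	/* objects allocated from this slab */
};

struct toy_node {
	struct toy_slab *partial;	/* head of the partial list */
	unsigned long partial_objs;	/* cached total, bulk-updated */
};

/* Slow path: a slab enters the partial list. The cached counter is
 * updated once per slab, not once per object allocation/free. */
static void toy_add_partial(struct toy_node *n, struct toy_slab *s)
{
	s->next = n->partial;
	n->partial = s;
	n->partial_objs += s->inuse;
}

/* Slow path: a slab leaves the partial list. */
static void toy_remove_partial(struct toy_node *n, struct toy_slab *s)
{
	struct toy_slab **p = &n->partial;

	while (*p && *p != s)
		p = &(*p)->next;
	if (*p) {
		*p = s->next;
		n->partial_objs -= s->inuse;
	}
}

/* The equivalent of count_partial() becomes an O(1) read instead of
 * a list walk holding the lock with IRQs off. */
static unsigned long toy_count_partial(const struct toy_node *n)
{
	return n->partial_objs;
}
```

The point of the sketch is only that the counter maintenance sits on the
add/remove-slab paths (the slow path), not on the per-object fast path.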