Re: [PATCH 1/3] mm/slub: directly load freelist from cpu partial slab in the likely case

On 2024/1/18 06:41, Christoph Lameter (Ampere) wrote:
> On Wed, 17 Jan 2024, Chengming Zhou wrote:
> 
>> The likely case is that we get a usable slab from the cpu partial list,
>> so we can directly load its freelist and return, instead of taking the
>> slower path that needs more work, like re-enabling interrupts and rechecking.
> 
> Ok I see that it could be useful to avoid the unlock_irq/lock_irq sequence in the partial cpu handling.

Right.
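
To make the shape of the likely case concrete, here is a minimal userspace
sketch (not the kernel code: the toy_* names are invented and a pthread
mutex stands in for the local cpu_slab lock; the slower fallback paths are
left out). The point is that the slab is taken off the cpu partial list and
its freelist is loaded within a single critical section, with no
unlock/relock in between:

#include <pthread.h>
#include <stddef.h>
#include <stdio.h>

struct toy_slab {
	struct toy_slab *next;
	void *freelist;
};

struct toy_cpu_cache {
	pthread_mutex_t lock;		/* stand-in for the cpu_slab lock */
	struct toy_slab *slab;		/* active cpu slab */
	struct toy_slab *partial;	/* cpu partial list */
	void *freelist;			/* per-cpu freelist */
};

/* Likely case: a usable slab sits on the cpu partial list. */
static void *toy_alloc_from_cpu_partial(struct toy_cpu_cache *c)
{
	void *object = NULL;

	pthread_mutex_lock(&c->lock);
	if (c->partial) {
		struct toy_slab *slab = c->partial;

		c->partial = slab->next;	/* unlink from cpu partial list */
		c->slab = slab;			/* it becomes the cpu slab */
		c->freelist = slab->freelist;	/* load the freelist directly */
		slab->freelist = NULL;
		object = c->freelist;		/* hand out the first object */
	}
	pthread_mutex_unlock(&c->lock);		/* one unlock, no relock */
	return object;
}

int main(void)
{
	int obj;
	struct toy_slab slab = { .next = NULL, .freelist = &obj };
	struct toy_cpu_cache c = {
		.lock = PTHREAD_MUTEX_INITIALIZER,
		.partial = &slab,
	};

	printf("allocated %p (expected %p)\n",
	       toy_alloc_from_cpu_partial(&c), (void *)&obj);
	return 0;
}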

> 
>> But we need to remove the "VM_BUG_ON(!new.frozen)" in get_freelist()
>> to reuse it, since a cpu partial slab is not frozen. That seems
>> acceptable since it's only there for debugging purposes.
> 
> This is a test verifying that the newly acquired slab is actually in frozen status. If that test is no longer necessary, then this is a bug that may need to be fixed independently. Maybe this test now needs to be different depending on where the partial slab originated from? Is the check only necessary when the slab is taken from the per-node partials?

Now there are two similar functions: get_freelist() and freeze_slab().

get_freelist() is used for the cpu slab: it transfers the slab's freelist
to the cpu freelist, so it has "VM_BUG_ON(!new.frozen)" in it, since the
cpu slab must already be frozen.

freeze_slab() is used for a slab taken from the node partial list: it
freezes the slab and grabs its freelist before the slab is used as the
cpu slab. So it has "VM_BUG_ON(new.frozen)" in it, since a node partial
slab must NOT be frozen yet. And it doesn't need the cpu_slab lock.

This patch handles the third case: a slab taken from the cpu partial
list. There we already hold the cpu_slab lock, so that path is changed
from freeze_slab() to reusing get_freelist().

So get_freelist() now has two cases to handle: the cpu slab and a cpu
partial list slab. The latter is NOT frozen, so "VM_BUG_ON(!new.frozen)"
has to be removed from it.

And "VM_BUG_ON(new.frozen)" in freeze_slab() is unchanged, so per node partials
are covered.
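
For reference, a minimal userspace sketch of where the two assertions sit
(again not the kernel code: toy_* names are invented, plain assert()
stands in for VM_BUG_ON(), and the cmpxchg retry loop is left out).
get_freelist() loses its frozen assertion because it now also sees
unfrozen cpu partial slabs, while freeze_slab() keeps its assertion for
the node partial case:

#include <assert.h>
#include <stdbool.h>
#include <stddef.h>
#include <stdio.h>

struct toy_slab {
	void *freelist;	/* remaining free objects */
	bool frozen;	/* set only while this is the cpu slab */
};

/*
 * get_freelist() counterpart: detach the freelist for the cpu slab, and
 * now also for a slab taken from the cpu partial list.  The old
 * assert(slab->frozen) had to go, because a cpu partial slab is not
 * frozen when it gets here.
 */
static void *toy_get_freelist(struct toy_slab *slab)
{
	void *freelist = slab->freelist;

	slab->freelist = NULL;
	slab->frozen = freelist != NULL;	/* mirrors new.frozen = freelist != NULL */
	return freelist;
}

/*
 * freeze_slab() counterpart: a slab coming off the node partial list
 * must NOT be frozen yet, so that assertion stays.
 */
static void *toy_freeze_slab(struct toy_slab *slab)
{
	void *freelist;

	assert(!slab->frozen);		/* "VM_BUG_ON(new.frozen)" stays */
	freelist = slab->freelist;
	slab->freelist = NULL;
	slab->frozen = true;
	return freelist;
}

int main(void)
{
	int obj;

	/* case 1: frozen cpu slab -> toy_get_freelist() */
	struct toy_slab cpu_slab = { .freelist = &obj, .frozen = true };
	printf("cpu slab: %p\n", toy_get_freelist(&cpu_slab));

	/* case 2: node partial slab, not frozen -> toy_freeze_slab() */
	struct toy_slab node_partial = { .freelist = &obj, .frozen = false };
	printf("node partial: %p\n", toy_freeze_slab(&node_partial));

	/* case 3 (this patch): cpu partial slab, not frozen -> toy_get_freelist() too */
	struct toy_slab cpu_partial = { .freelist = &obj, .frozen = false };
	printf("cpu partial: %p\n", toy_get_freelist(&cpu_partial));
	return 0;
}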

Thanks!

> 
>> There is some small performance improvement too, which shows by:
>> perf bench sched messaging -g 5 -t -l 100000
>>
>>            mm-stable   slub-optimize
>> Total time      7.473    7.209
> 
> Hmm... Good, avoiding the lock/relock sequence helps.



