Re: [PATCH RFC] io_uring: io_kiocb alloc cache

+slab allocator people

On Wed, May 13, 2020 at 6:30 PM Jens Axboe <axboe@xxxxxxxxx> wrote:
> I turned the quick'n dirty from the other day into something a bit
> more done. Would be great if someone else could run some performance
> testing with this, I get about a 10% boost on the pure NOP benchmark
> with this. But that's just on my laptop in qemu, so some real iron
> testing would be awesome.

10% boost compared to which allocator? Are you using CONFIG_SLUB?

> The idea here is to have a percpu alloc cache. There are two sets
> of state:
>
> 1) Requests that have IRQ completion. preempt disable is not enough
>    there, we need to disable local irqs. This is a lot slower in
>    certain setups, so we keep this separate.
>
> 2) No IRQ completion, we can get by with just disabling preempt.
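
For reference, here's my reading of that scheme as a sketch - every
name in it (io_req_cache_entry, io_req_cache, io_req_cache_get(), the
io_kiocb "list" member) is invented for illustration, not taken from
the actual patch:

#include <linux/list.h>
#include <linux/percpu.h>

struct io_req_cache_entry {
	struct list_head alloc_list;	/* cached requests, reused on alloc */
	struct list_head free_list;	/* surplus, reapable by the shrinker */
	unsigned int nr_cached;		/* entries on alloc_list */
};

/* index [1]: requests with IRQ completion, index [0]: the rest */
static DEFINE_PER_CPU(struct io_req_cache_entry, io_req_cache[2]);

static struct io_kiocb *io_req_cache_get(bool irq_comp)
{
	struct io_req_cache_entry *ce;
	struct io_kiocb *req = NULL;
	unsigned long flags = 0;

	if (irq_comp)
		local_irq_save(flags);	/* frees can come from IRQ context */
	else
		preempt_disable();	/* percpu access just needs this */

	ce = this_cpu_ptr(&io_req_cache[irq_comp]);
	if (!list_empty(&ce->alloc_list)) {
		req = list_first_entry(&ce->alloc_list, struct io_kiocb, list);
		list_del(&req->list);
		ce->nr_cached--;
	}

	if (irq_comp)
		local_irq_restore(flags);
	else
		preempt_enable();
	return req;	/* NULL means: fall back to kmem_cache_alloc() */
}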

The SLUB allocator has percpu caching, too, and as long as you don't
enable any SLUB debugging, ASAN or the like, and you're not hitting
any slowpath processing, it doesn't even have to disable interrupts;
the fastpath gets by with a cmpxchg_double.
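
Heavily simplified, SLUB's kmem_cache_alloc() fastpath
(slab_alloc_node() in mm/slub.c) does something like the following;
the tid revalidation, node matching and slowpath details are elided,
so treat this as a sketch, not the verbatim code:

	do {
		tid = this_cpu_read(s->cpu_slab->tid);
		object = this_cpu_read(s->cpu_slab->freelist);
		if (unlikely(!object))
			goto slowpath;	/* refill the percpu slab */
		/* speculatively read the next-object pointer */
		next = get_freepointer_safe(s, object);
	} while (unlikely(!this_cpu_cmpxchg_double(s->cpu_slab->freelist,
						   s->cpu_slab->tid,
						   object, tid,
						   next, next_tid(tid))));
	return object;

The tid bump is what makes this safe against migration between the
reads and the cmpxchg: if we raced, the cmpxchg fails and we retry.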

Have you profiled what the actual problem is when using SLUB? Have you
tested with CONFIG_SLAB_FREELIST_HARDENED turned off,
CONFIG_SLUB_DEBUG turned off, CONFIG_TRACING turned off,
CONFIG_FAILSLAB turned off, and so on? As far as I know, if you
disable all hardening and debugging infrastructure, SLUB's
kmem_cache_alloc()/kmem_cache_free() on the fastpaths should be really
straightforward. And if you don't turn those off, the comparison is
kinda unfair, because your custom freelist won't respect those flags.
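
Concretely, something like this in the benchmark kernel's .config
would make for an apples-to-apples comparison (illustrative; the
exact symbol set depends on the tree):

CONFIG_SLUB=y
# CONFIG_SLUB_DEBUG is not set
# CONFIG_SLAB_FREELIST_HARDENED is not set
# CONFIG_SLAB_FREELIST_RANDOM is not set
# CONFIG_FAILSLAB is not set
# CONFIG_KASAN is not set
# CONFIG_TRACING is not set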

Building custom allocators like this interferes with infrastructure
meant to catch memory safety issues (both pure debugging code and
safety checks meant for production use) - for example, ASAN and
memory tagging will no longer be able to detect use-after-free of
objects managed by your custom allocator cache.
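
Schematically, with the hypothetical helpers from my sketches
(io_req_cache_get() above, io_req_cache_put() further down):

	req = io_req_cache_get(false);
	io_req_cache_put(req, false);	/* parked on a percpu list, never kfree()d */
	data = req->user_data;		/* use-after-free, but the object still
					   looks live to KASAN, so no report */

Going through kmem_cache_alloc()/kmem_cache_free() instead, the
object would be poisoned on free and the stale read flagged
immediately.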

So please, don't implement custom one-off allocators in random
subsystems. And if you do see a way to actually improve the
performance of memory allocation, add that to the generic SLUB
infrastructure.

> Outside of that, any freed request goes to the ce->alloc_list.
> Attempting to alloc a request will check there first. When freeing
> a request, if we're over some threshold, move requests to the
> ce->free_list. This list can be browsed by the shrinker to free
> up memory. If a CPU goes offline, all requests are reaped.
>
> That's about it. If we go further with this, it'll be split into
> a few separate patches. For now, just throwing this out there
> for testing. The patch is against my for-5.8/io_uring branch.
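
And the free side of that, again as a sketch with the same invented
names (shrinker-side locking and the real threshold value elided):

#define IO_REQ_CACHE_MAX	256	/* illustrative, not the patch's value */

static void io_req_cache_put(struct io_kiocb *req, bool irq_comp)
{
	struct io_req_cache_entry *ce;
	unsigned long flags = 0;

	if (irq_comp)
		local_irq_save(flags);
	else
		preempt_disable();

	ce = this_cpu_ptr(&io_req_cache[irq_comp]);
	list_add(&req->list, &ce->alloc_list);
	if (++ce->nr_cached > IO_REQ_CACHE_MAX) {
		/* hand the oldest entry over for the shrinker to reap */
		list_move_tail(ce->alloc_list.prev, &ce->free_list);
		ce->nr_cached--;
	}

	if (irq_comp)
		local_irq_restore(flags);
	else
		preempt_enable();
}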

That branch doesn't seem to exist on
<https://git.kernel.dk/cgit/linux-block/>...



