Re: [PATCH 0/1] mm: Remove the SLAB allocator

Hi,

On 4/11/19 10:55 AM, Michal Hocko wrote:
> Please please have it more rigorous than what happened when SLUB was
> forced to become the default.

This is the hard part.

Even if you are able to show that SLUB is as fast as SLAB for all the benchmarks you run, there's bound to be that one workload where SLUB regresses. You will then have people complaining about that (rightly so) and you're again stuck with two allocators.

To move forward, I think we should look at possible *pathological* cases where we think SLAB might have an advantage. For example, SLUB historically had much more difficulty with remote CPU frees than SLAB. I don't know if this is still the case, but it should be easy to construct a synthetic benchmark to measure it.

For example, have a userspace process that does networking, which is often memory allocation intensive, so that we know that SKBs traverse between CPUs. You can do this by making sure that the NIC queues are mapped to CPU N (so that network softirqs have to run on that CPU) but the process is pinned to CPU M.
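The setup above might look something like this (IRQ number, interface name, and benchmark binary are placeholders; requires root, and the actual numbers depend on the machine):

```shell
# Find the IRQs used by the NIC's rx queues:
grep eth0 /proc/interrupts

# Pin, say, IRQ 42 (an rx queue) to CPU 0 (CPU bitmask 0x1), so the
# network softirqs -- and hence SKB allocations -- run there:
echo 1 > /proc/irq/42/smp_affinity

# Run the allocation-heavy network process on CPU 2 (bitmask 0x4),
# forcing the corresponding frees onto a different CPU:
taskset 4 ./netbench
```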

It's, of course, worth thinking about other pathological cases too. Workloads that cause large allocations are one; workloads that cause lots of slab cache shrinking are another.

- Pekka



