Re: [LSF/MM/BPF TOPIC] SLOB+SLAB allocators removal and future SLUB improvements

Blah, sorry, let's try this (the sheet should be public now):
https://docs.google.com/spreadsheets/d/e/2PACX-1vS1uiw85AIpzgcVlvNlDCD9PuCIubiaJvBrKIC5OyAQURZHogOuCtpFNsC-zGHZ4-XNKJVcGgkpL-KH/pubhtml

On Wed, Mar 22, 2023 at 9:02 AM Hyeonggon Yoo <42.hyeyoo@xxxxxxxxx> wrote:
>
> On Wed, Mar 22, 2023 at 08:15:28AM -0400, Binder Makin wrote:
> > I was looking at SLAB removal and started by running A/B benchmarks of
> > SLAB vs. SLUB.  Please note these are only preliminary results.
> >
> > These were run using kernel 6.1.13 built with either SLAB or SLUB.
> > Machines were standard datacenter servers.
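> >
> > (For reference, switching allocators is a Kconfig choice; a minimal
> > .config fragment for such an A/B build on 6.1 would look like this,
> > one variant per kernel build:)
> >
> >   # build A: SLAB kernel
> >   CONFIG_SLAB=y
> >   # CONFIG_SLUB is not set
> >
> >   # build B: SLUB kernel
> >   CONFIG_SLUB=y
> >   # CONFIG_SLAB is not set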
> >
> > Hackbench reports completion time, so smaller is better.
> > For all other benchmarks, larger is better.
> > https://docs.google.com/spreadsheets/d/e/2PACX-1vQ47Mekl8BOp3ekCefwL6wL8SQiv6Qvp5avkU2ssQSh41gntjivE-aKM4PkwzkC4N_s_MxUdcsokhhz/pubhtml
> >
> > Some notes:
> > SUnreclaim and SReclaimable (from /proc/meminfo) show unreclaimable and
> > reclaimable slab memory, respectively.  Both are substantially higher
> > with SLUB, but I believe that is to be expected.
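> >
> > (A minimal userspace sketch that samples the two counters between
> > benchmark runs; the /proc/meminfo field names are real, the program
> > itself is just an illustration:)
> >
> >   #include <stdio.h>
> >   #include <string.h>
> >
> >   /* Print the slab counters SReclaimable and SUnreclaim (in kB),
> >    * as exported by the kernel in /proc/meminfo. */
> >   int main(void)
> >   {
> >           FILE *f = fopen("/proc/meminfo", "r");
> >           char line[256];
> >
> >           if (!f)
> >                   return 1;
> >           while (fgets(line, sizeof(line), f)) {
> >                   if (!strncmp(line, "SReclaimable:", 13) ||
> >                       !strncmp(line, "SUnreclaim:", 11))
> >                           fputs(line, stdout);
> >           }
> >           fclose(f);
> >           return 0;
> >   }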
> >
> > Various results show a 5-10% degradation with SLUB.  That feels
> > concerning to me, but I'm not sure what others' tolerance would be.
>
> Hello Binder,
>
> Thank you for sharing the data on which workloads
> SLUB performs worse than SLAB. This information is critical for
> improving SLUB and deprecating SLAB.
>
> By the way, it appears that the spreadsheet is currently set to private.
> Could you make it public so I can access it?
>
> I am really interested in running similar experiments on my machines
> to obtain comparable data that can be used to improve SLUB.
>
> Thanks,
> Hyeonggon
>
> > The redis results on AMD show some pretty bad degradations, in the
> > 10-20% range.  netpipe on Intel also has issues: 10-17%.
> >
> > On Tue, Mar 14, 2023 at 4:05 AM Vlastimil Babka <vbabka@xxxxxxx> wrote:
> >
> > > As you're probably aware, my plan is to get rid of SLOB and SLAB, leaving
> > > only SLUB going forward.  The removal of SLOB seems to be going well: there
> > > were no objections to the deprecation, and I've posted v1 of the removal
> > > itself [1], so it could be in -next soon.
> > >
> > > The immediate benefit of that is that we can allow kfree() (and
> > > kfree_rcu()) to free objects allocated with kmem_cache_alloc() -
> > > something that IIRC at least the xfs people wanted in the past, and
> > > that SLOB was incompatible with.
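> > >
> > > (A minimal sketch of the pattern this enables; "foo" and the cache
> > > name are made up for illustration, but kmem_cache_create(),
> > > kmem_cache_alloc() and kfree() are the real APIs:)
> > >
> > >   struct foo { long a, b; };
> > >   static struct kmem_cache *foo_cache;
> > >
> > >   /* setup, e.g. at module init: */
> > >   foo_cache = kmem_cache_create("foo_cache", sizeof(struct foo),
> > >                                 0, 0, NULL);
> > >
> > >   /* allocate from the dedicated cache... */
> > >   struct foo *p = kmem_cache_alloc(foo_cache, GFP_KERNEL);
> > >   /* ... use p ... */
> > >
> > >   /* ...and free with plain kfree(), no cache pointer needed;
> > >    * SLOB was the one allocator where this did not work. */
> > >   kfree(p);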
> > >
> > > For SLAB removal I haven't heard any objections yet (but I also haven't
> > > deprecated it yet).  If there are users whose particular workloads do
> > > better with SLAB than SLUB, we can discuss why those would regress and
> > > what can be done about that in SLUB.
> > >
> > > Once we have just one slab allocator in the kernel, we can take a closer
> > > look at what users are missing from it that forces them to create their
> > > own allocators (e.g. BPF), and consider adding generic implementations of
> > > those features to SLUB.
> > >
> > > Thanks,
> > > Vlastimil
> > >
> > > [1] https://lore.kernel.org/all/20230310103210.22372-1-vbabka@xxxxxxx/



