Re: Differences between SLUB/SLAB/SLOB/SLQB

Thank you Mulyadi, I have read another article (with graphical display):

http://lwn.net/Articles/311502/?format=printable

Quite a readable article.   Thanks for sharing!

On Thu, Apr 2, 2009 at 12:37 AM, Mulyadi Santosa
<mulyadi.santosa@xxxxxxxxx> wrote:
> On Wed, Apr 1, 2009 at 10:08 AM, Peter Teoh <htmldeveloper@xxxxxxxxx> wrote:
>> Based on http://lwn.net/Articles/229984/, which explains the
>> differences in some ways, my understanding is that SLUB is targeting
>> machines with a large number of CPUs/nodes.   For machines with 2 or 4
>> cores, it should not matter much, correct?   (Or possibly higher
>> overhead in some scenarios?)
>
> I think SLUB was created primarily to deal with NUMA machines (I also
> read this on the LWN page you mentioned). Thus, it maintains locality
> as well as it can. The same page also mentions "preventing cache
> line bouncing", which further convinces me of my vague conclusion.
>
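
Adding a bit of context here: whichever backend the kernel is built
with, the rest of the kernel sees the same kmem_cache_* API, so
SLAB/SLUB/SLOB are interchangeable from a caller's point of view. A
minimal sketch of that API below -- the "foo" structure and cache name
are made up purely for illustration:

#include <linux/module.h>
#include <linux/slab.h>

struct foo {
	int id;
	char name[16];
};

static struct kmem_cache *foo_cache;	/* one cache per object type */

static int __init foo_init(void)
{
	/* The slab backend (SLAB, SLUB, SLOB, ...) is chosen at build
	 * time; this call looks exactly the same either way. */
	foo_cache = kmem_cache_create("foo_cache", sizeof(struct foo),
				      0, SLAB_HWCACHE_ALIGN, NULL);
	if (!foo_cache)
		return -ENOMEM;
	return 0;
}

static void __exit foo_exit(void)
{
	kmem_cache_destroy(foo_cache);
}

module_init(foo_init);
module_exit(foo_exit);
MODULE_LICENSE("GPL");

/* Elsewhere in the code:
 *	struct foo *f = kmem_cache_alloc(foo_cache, GFP_KERNEL);
 *	...
 *	kmem_cache_free(foo_cache, f);
 */
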
> About SLOB, I believe it targets embedded devices, as described on
> this page: http://lwn.net/Articles/157944/. The SLOB implementation is
> way simpler than SLAB, and the cache management itself, at least after
> reading the code briefly, looks simple too. So the key words here are:
> small code, small memory footprint.
>
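
Agreed -- and the choice between the three is just a build-time option
("Choose SLAB allocator" under General setup), so an embedded
configuration would typically carry something like the following
.config fragment (shown only as an illustration):

# General setup  --->  Choose SLAB allocator
CONFIG_EMBEDDED=y
# CONFIG_SLAB is not set
# CONFIG_SLUB is not set
CONFIG_SLOB=y

The API sketch above compiles unchanged against any of these.
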
> Now... hmm, SLQB... I hit Google and found this:
> http://linux.derkeiler.com/Mailing-Lists/Kernel/2009-01/msg06324.html.
>
> From this paragraph:
>
> "SLQB: A slab allocator that focuses on per-CPU scaling, and good
>  performance with order-0 allocations. Fast paths emphasis is placed on
>  local allocation and freeing, but with a secondary goal of good remote
>  freeing (freeing on another CPU from that which allocated)."
>
> I draw another (sorry Peter :) ) vague conclusion: it tries to
> optimize the fast paths when dealing with local allocation. Why?
> Probably because Nick Piggin thinks the other allocators are not doing
> well in this respect, while most applications will mostly be handed
> free pages from the same node. Another keyword I read on that page is
> "using queues", which leads me to think that the queues might be
> implemented on a per-CPU, per-node basis. Otherwise, scalability
> won't be good, IMO.
>
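
That reading makes sense to me. To make the per-CPU queue idea
concrete, here is a toy sketch -- this is NOT SLQB's actual code or
data structures, just an illustration (with made-up names) of why a
per-CPU free queue keeps the fast path away from other CPUs' cache
lines:

#include <linux/percpu.h>
#include <linux/slab.h>

struct toy_object {
	struct toy_object *next;
	/* payload ... */
};

struct toy_queue {
	struct toy_object *freelist;	/* objects freed on this CPU */
	unsigned long nr_free;
};

static DEFINE_PER_CPU(struct toy_queue, toy_queues);

static struct toy_object *toy_alloc(void)
{
	struct toy_queue *q = &get_cpu_var(toy_queues);
	struct toy_object *obj = q->freelist;

	if (obj) {			/* fast path: reuse a local object,
					 * no locks, no remote cache lines */
		q->freelist = obj->next;
		q->nr_free--;
	}
	put_cpu_var(toy_queues);

	if (!obj)			/* slow path: fall back to the
					 * underlying allocator */
		obj = kmalloc(sizeof(*obj), GFP_KERNEL);
	return obj;
}

static void toy_free(struct toy_object *obj)
{
	struct toy_queue *q = &get_cpu_var(toy_queues);

	obj->next = q->freelist;	/* push onto the local queue */
	q->freelist = obj;
	q->nr_free++;
	put_cpu_var(toy_queues);
}

A real allocator of course also has to drain or trim these queues and
hand remotely freed objects back to the CPU/node that owns them, which
is where the per-CPU, per-node bookkeeping you mention would come in.
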
> What do you guys think?
>
> regards,
>
> Mulyadi.
>



-- 
Regards,
Peter Teoh

--
To unsubscribe from this list: send an email with
"unsubscribe kernelnewbies" to ecartis@xxxxxxxxxxxx
Please read the FAQ at http://kernelnewbies.org/FAQ


