Re: [Q] Default SLAB allocator

David,

On Mon, Oct 15, 2012 at 9:46 PM, David Rientjes <rientjes@xxxxxxxxxx> wrote:
> On Sat, 13 Oct 2012, Ezequiel Garcia wrote:
>
>> But SLAB suffers from a lot more internal fragmentation than SLUB,
>> which I guess is a known fact. So memory-constrained devices
>> would waste more memory by using SLAB.
>
> Even with slub's per-cpu partial lists?

I'm not considering that, but rather plain fragmentation: the difference
between the requested and the allocated size, per object.
Admittedly, this may be a naive analysis.
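
Just to show what I mean, something along these lines (untested, only a
sketch) could report the per-object waste by comparing the requested
size against what ksize() says is actually backing the object:

#include <linux/module.h>
#include <linux/slab.h>
#include <linux/kernel.h>

static int __init kmalloc_waste_init(void)
{
	/* Arbitrary request sizes, just for illustration */
	static const size_t sizes[] = { 8, 20, 100, 500 };
	int i;

	for (i = 0; i < ARRAY_SIZE(sizes); i++) {
		void *p = kmalloc(sizes[i], GFP_KERNEL);

		if (!p)
			continue;
		/* ksize() reports the usable size the allocator really gave us */
		pr_info("requested %zu, allocated %zu, wasted %zu\n",
			sizes[i], ksize(p), ksize(p) - sizes[i]);
		kfree(p);
	}
	return 0;
}

static void __exit kmalloc_waste_exit(void)
{
}

module_init(kmalloc_waste_init);
module_exit(kmalloc_waste_exit);
MODULE_LICENSE("GPL");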

However, the devices where this matters would have only one CPU, right?
So the overhead imposed by per-cpu data shouldn't have much of an impact.

Studying the difference in overhead imposed by the two allocators is
still on my TODO list.

Now, returning to fragmentation. The problem with SLAB is that
its smallest cache available for kmalloc'ed objects is 32 bytes,
while SLUB allows 8, 16, 24 ...
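
To put a rough number on it, using the sizes above: a 20-byte kmalloc()
has to be served from SLAB's 32-byte cache, wasting 12 bytes per object,
while an allocator with a 24-byte cache would waste only 4.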

Perhaps it would make sense to add smaller caches to SLAB?
Is there any strong reason for NOT doing this?

Thanks,

    Ezequiel
