Re: [RFC 0/2] guarantee natural alignment for kmalloc()

On 3/21/19 3:23 AM, Matthew Wilcox wrote:
> On Wed, Mar 20, 2019 at 10:48:03PM +0100, Vlastimil Babka wrote:
>>
>> Well, looks like that's what happens. This is with SLAB, but the alignment
>> calculations should be common: 
>>
>> slabinfo - version: 2.1
>> # name            <active_objs> <num_objs> <objsize> <objperslab> <pagesperslab> : tunables <limit> <batchcount> <sharedfactor> : slabdata <active_slabs> <num_slabs> <sharedavail>
>> kmalloc-96          2611   4896    128   32    1 : tunables  120   60    8 : slabdata    153    153      0
>> kmalloc-128         4798   5536    128   32    1 : tunables  120   60    8 : slabdata    173    173      0
> 
> Hmm.  On my laptop, I see:
> 
> kmalloc-96         28050  35364     96   42    1 : tunables    0    0    0 : slabdata    842    842      0
> 
> That'd take me from 842 * 4k pages to 1105 4k pages -- an extra megabyte of
> memory.
> 
> This is running Debian's 4.19 kernel:
> 
> # CONFIG_SLAB is not set
> CONFIG_SLUB=y

Ah, you're right. SLAB creates kmalloc caches with:

#ifndef ARCH_KMALLOC_FLAGS
#define ARCH_KMALLOC_FLAGS SLAB_HWCACHE_ALIGN
#endif

create_kmalloc_caches(ARCH_KMALLOC_FLAGS);

SLUB, on the other hand, just does:

create_kmalloc_caches(0);

even though it does use SLAB_HWCACHE_ALIGN for its own kmem_cache_node and
kmem_cache caches.
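
For reference, here is a minimal standalone sketch, not kernel code: it
assumes a 64-byte cache line and 4 KiB slab pages, and the align_up()
helper is just an illustrative stand-in for the rounding that
SLAB_HWCACHE_ALIGN causes. It shows why the 96-byte kmalloc cache ends up
with 128-byte objects, and why objects per page drop from 42 to 32 as in
the slabinfo output above:

#include <stddef.h>
#include <stdio.h>

#define CACHE_LINE 64      /* assumed L1 cache line size */
#define SLAB_PAGE  4096    /* assumed slab page size */

/* Round size up to the next multiple of align (align is a power of two). */
static size_t align_up(size_t size, size_t align)
{
	return (size + align - 1) & ~(align - 1);
}

int main(void)
{
	size_t plain   = 96;                        /* SLUB: objsize stays 96 */
	size_t aligned = align_up(96, CACHE_LINE);  /* SLAB: rounded up to 128 */

	printf("objsize: %zu -> %zu\n", plain, aligned);
	printf("objects per page: %zu -> %zu\n",
	       SLAB_PAGE / plain, SLAB_PAGE / aligned);  /* 42 -> 32 */
	return 0;
}

With 32 instead of 42 objects per 4k page, the ~35k kmalloc-96 objects in
the slabinfo above would need roughly 1100 pages instead of 842, which is
where the extra megabyte comes from.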




