Re: [PATCH] fs: switch timespec64 fields in inode to discrete integers

On Fri, May 17, 2024 at 08:08:40PM -0400, Jeff Layton wrote:
> For reference (according to pahole):
> 
>     Before:	/* size: 624, cachelines: 10, members: 53 */
>     After: 	/* size: 616, cachelines: 10, members: 56 */
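
(For context, a rough sketch of the layout change behind those
numbers; the discrete field names below are illustrative, not quoted
from the patch.  On 64-bit, struct timespec64 is 16 bytes, so three
embedded timestamps cost 48 bytes while six discrete fields need only
36 -- presumably padding accounts for the difference between that 12
and the 8 bytes pahole reports:)

	/* Before: three embedded struct timespec64 members,
	 * each s64 tv_sec + long tv_nsec = 16 bytes on 64-bit. */
	struct timespec64	i_atime;
	struct timespec64	i_mtime;
	struct timespec64	i_ctime;

	/* After: discrete integers.  Nanoseconds never exceed
	 * 999999999, so a u32 is enough; member count rises by
	 * three (53 -> 56) while the fields pack into 36 bytes. */
	time64_t		i_atime_sec;
	time64_t		i_mtime_sec;
	time64_t		i_ctime_sec;
	u32			i_atime_nsec;
	u32			i_mtime_nsec;
	u32			i_ctime_nsec;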

Smaller is always better, but for a meaningful improvement we'd need
to cross an objects-per-slab boundary.  On my laptop running a Debian
6.6.15 kernel, I see:

inode_cache        11398  11475    640   25    4 : tunables    0    0    0 : slabdata    459    459      0

so there are 25 inodes per 4 pages.  Shaving off the 8 bytes saved
here takes the slab object from 640 to 632, which is still 25 per 4
pages.  At 628 bytes, we get 26 per 4 pages.  At 604 bytes, we're at
27.  And at 584 bytes, we get 28.
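
The arithmetic is just integer division of the slab size by the
object size (ignoring SLUB's per-slab overhead, so treat it as an
approximation); a quick userspace sketch, assuming 4 KiB pages:

	#include <stdio.h>

	int main(void)
	{
		/* inode_cache above uses 4-page slabs */
		const unsigned int slab_bytes = 4 * 4096;
		/* current object size, then the candidate sizes */
		const unsigned int sizes[] = { 640, 632, 628, 604, 584 };

		for (unsigned int i = 0; i < sizeof(sizes) / sizeof(sizes[0]); i++)
			printf("%4u bytes -> %u objects per slab\n",
			       sizes[i], slab_bytes / sizes[i]);
		return 0;
	}

which prints 25, 25, 26, 27 and 28 objects respectively.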

Of course, struct inode gets embedded in a lot of filesystem inodes.

xfs_inode         142562 142720   1024   32    8 : tunables    0    0    0 : slabdata   4460   4460      0
ext4_inode_cache      81     81   1184   27    8 : tunables    0    0    0 : slabdata      3      3      0
sock_inode_cache    2123   2223    832   39    8 : tunables    0    0    0 : slabdata     57     57      0

So any of them might cross a magic boundary where we suddenly get more
objects per slab.

Not trying to diss the work you've done here, just pointing out the
limits for anyone who's trying to do something similar.  Or maybe
inspire someone to do more reductions ;-)



