[PATCH] fs, pseudo: Do not update atime for pseudo inodes

The kernel uses internal mounts created by kern_mount() and populated
with files that have no lookup path via alloc_file_pseudo() for a variety
of reasons. A relevant example is anonymous pipes: every vfs_write also
checks whether atime needs to be updated even though it is unnecessary
for such files. Most of the relevant users of alloc_file_pseudo() either
have no statfs helper or use simple_statfs, which does not return
st_atime. The closest proxy measure is the proc fd representation of such
inodes, which does not appear to change once they are created. This patch
sets S_NOATIME on inode->i_flags for inodes created by new_inode_pseudo()
so that atime will not be updated.

The test motivating this was "perf bench sched messaging --pipe", where
atime-related functions were noticeable in the profiles. On a
single-socket machine using threads, the difference in performance was

                          5.8.0-rc1              5.8.0-rc1
                            vanilla       pseudoatime-v1r1
Amean     1       0.4807 (   0.00%)      0.4623 *   3.81%*
Amean     3       1.5543 (   0.00%)      1.4610 (   6.00%)
Amean     5       2.5647 (   0.00%)      2.5183 (   1.81%)
Amean     7       3.7407 (   0.00%)      3.7120 (   0.77%)
Amean     12      5.9900 (   0.00%)      5.5233 (   7.79%)
Amean     18      8.8727 (   0.00%)      6.8353 *  22.96%*
Amean     24     11.1510 (   0.00%)      8.9123 *  20.08%*
Amean     30     13.9330 (   0.00%)     10.8743 *  21.95%*
Amean     32     14.2177 (   0.00%)     10.9923 *  22.69%*

Note that I consider the impact to be disproportionate and so it may not
be universal. On a profiled run for just *one* group, the difference in
perf profiles for atime-related functions was

     0.23%     -0.18%  [kernel.vmlinux]    [k] atime_needs_update
     0.13%     -0.02%  [kernel.vmlinux]    [k] touch_atime

So there is a large reduction in atime overhead which on this particular
machine must have gotten incrementally worse as the group count
increased. I could measure it specifically but I think it's reasonable
to reduce atime overhead for pseudo files unconditionally.

Signed-off-by: Mel Gorman <mgorman@xxxxxxxxxxxxxxxxxxx>
---
 fs/inode.c | 1 +
 1 file changed, 1 insertion(+)

diff --git a/fs/inode.c b/fs/inode.c
index 72c4c347afb7..6d4ea0c9fe3e 100644
--- a/fs/inode.c
+++ b/fs/inode.c
@@ -930,6 +930,7 @@ struct inode *new_inode_pseudo(struct super_block *sb)
 	if (inode) {
 		spin_lock(&inode->i_lock);
 		inode->i_state = 0;
+		inode->i_flags |= S_NOATIME;
 		spin_unlock(&inode->i_lock);
 		INIT_LIST_HEAD(&inode->i_sb_list);
 	}


