Re: [PATCH] kernfs: attach uuid for every kernfs and report it in fsid

On Mon, Jul 10, 2023 at 11:33:38AM -0700, Ivan Babrou wrote:
> The following two commits added the same thing for tmpfs:
> 
> * commit 2b4db79618ad ("tmpfs: generate random sb->s_uuid")
> * commit 59cda49ecf6c ("shmem: allow reporting fanotify events with file handles on tmpfs")
> 
> Having fsid allows using fanotify, which is especially handy for cgroups,
> where one might be interested in knowing when they are created or removed.
> 
> Signed-off-by: Ivan Babrou <ivan@xxxxxxxxxxxxxx>
> ---
>  fs/kernfs/mount.c | 13 ++++++++++++-
>  1 file changed, 12 insertions(+), 1 deletion(-)
> 
> diff --git a/fs/kernfs/mount.c b/fs/kernfs/mount.c
> index d49606accb07..930026842359 100644
> --- a/fs/kernfs/mount.c
> +++ b/fs/kernfs/mount.c
> @@ -16,6 +16,8 @@
>  #include <linux/namei.h>
>  #include <linux/seq_file.h>
>  #include <linux/exportfs.h>
> +#include <linux/uuid.h>
> +#include <linux/statfs.h>
>  
>  #include "kernfs-internal.h"
>  
> @@ -45,8 +47,15 @@ static int kernfs_sop_show_path(struct seq_file *sf, struct dentry *dentry)
>  	return 0;
>  }
>  
> +int kernfs_statfs(struct dentry *dentry, struct kstatfs *buf)
> +{
> +	simple_statfs(dentry, buf);
> +	buf->f_fsid = uuid_to_fsid(dentry->d_sb->s_uuid.b);
> +	return 0;
> +}
> +
>  const struct super_operations kernfs_sops = {
> -	.statfs		= simple_statfs,
> +	.statfs		= kernfs_statfs,
>  	.drop_inode	= generic_delete_inode,
>  	.evict_inode	= kernfs_evict_inode,
>  
> @@ -351,6 +360,8 @@ int kernfs_get_tree(struct fs_context *fc)
>  		}
>  		sb->s_flags |= SB_ACTIVE;
>  
> +		uuid_gen(&sb->s_uuid);

Since kernfs has a lot of nodes (hundreds of thousands, if not more at
times, created at boot), did you just slow down creating them all and
increase memory usage in a measurable way?

We have been trying to slim things down.  What userspace tools need this
change?  Who is going to use it, and for what?

There were some benchmarks people ran when booting large-memory systems
that you might want to reproduce here, to verify that nothing is going
to be harmed.

thanks,

greg k-h
