Re: [RFC PATCH v7 06/24] ceph: parse new fscrypt_auth and fscrypt_file fields in inode traces

On 7/7/21 10:56 PM, Luis Henriques wrote:
On Wed, Jul 07, 2021 at 03:32:13PM +0100, Luis Henriques wrote:
On Wed, Jul 07, 2021 at 08:19:25AM -0400, Jeff Layton wrote:
On Wed, 2021-07-07 at 19:19 +0800, Xiubo Li wrote:
On 7/7/21 6:47 PM, Luis Henriques wrote:
On Fri, Jun 25, 2021 at 09:58:16AM -0400, Jeff Layton wrote:
...and store them in the ceph_inode_info.

Signed-off-by: Jeff Layton <jlayton@xxxxxxxxxx>
---
   fs/ceph/file.c       |  2 ++
   fs/ceph/inode.c      | 18 ++++++++++++++++++
   fs/ceph/mds_client.c | 44 ++++++++++++++++++++++++++++++++++++++++++++
   fs/ceph/mds_client.h |  4 ++++
   fs/ceph/super.h      |  6 ++++++
   5 files changed, 74 insertions(+)

diff --git a/fs/ceph/file.c b/fs/ceph/file.c
index 2cda398ba64d..ea0e85075b7b 100644
--- a/fs/ceph/file.c
+++ b/fs/ceph/file.c
@@ -592,6 +592,8 @@ static int ceph_finish_async_create(struct inode *dir, struct inode *inode,
   	iinfo.xattr_data = xattr_buf;
   	memset(iinfo.xattr_data, 0, iinfo.xattr_len);
+ /* FIXME: set fscrypt_auth and fscrypt_file */
+
   	in.ino = cpu_to_le64(vino.ino);
   	in.snapid = cpu_to_le64(CEPH_NOSNAP);
   	in.version = cpu_to_le64(1);	// ???
diff --git a/fs/ceph/inode.c b/fs/ceph/inode.c
index f62785e4dbcb..b620281ea65b 100644
--- a/fs/ceph/inode.c
+++ b/fs/ceph/inode.c
@@ -611,6 +611,13 @@ struct inode *ceph_alloc_inode(struct super_block *sb)
   	ci->i_meta_err = 0;
+#ifdef CONFIG_FS_ENCRYPTION
+	ci->fscrypt_auth = NULL;
+	ci->fscrypt_auth_len = 0;
+	ci->fscrypt_file = NULL;
+	ci->fscrypt_file_len = 0;
+#endif
+
   	return &ci->vfs_inode;
   }
@@ -619,6 +626,9 @@ void ceph_free_inode(struct inode *inode)
   	struct ceph_inode_info *ci = ceph_inode(inode);
   	kfree(ci->i_symlink);
+#ifdef CONFIG_FS_ENCRYPTION
+	kfree(ci->fscrypt_auth);
+#endif
   	kmem_cache_free(ceph_inode_cachep, ci);
   }
@@ -1021,6 +1031,14 @@ int ceph_fill_inode(struct inode *inode, struct page *locked_page,
   		xattr_blob = NULL;
   	}
+ if (iinfo->fscrypt_auth_len && !ci->fscrypt_auth) {
+		ci->fscrypt_auth_len = iinfo->fscrypt_auth_len;
+		ci->fscrypt_auth = iinfo->fscrypt_auth;
+		iinfo->fscrypt_auth = NULL;
+		iinfo->fscrypt_auth_len = 0;
+		inode_set_flags(inode, S_ENCRYPTED, S_ENCRYPTED);
+	}
I think we also need to free iinfo->fscrypt_auth here if ci->fscrypt_auth
is already set.  Something like:

	if (iinfo->fscrypt_auth_len) {
		if (!ci->fscrypt_auth) {
			...
		} else {
			kfree(iinfo->fscrypt_auth);
			iinfo->fscrypt_auth = NULL;
		}
	}

IMO, this should be okay because it will be freed in
destroy_reply_info() when putting the request.


Yes. All of that should get cleaned up with the request.
Hmm... ok, so maybe I missed something because I *did* see kmemleak
complaining.  Maybe it was on the READDIR path.  /me goes look again.
Ah, that was indeed the problem.  So, here's a quick hack to fix
destroy_reply_info() so that it also frees the extra memory from READDIR:

@@ -686,12 +686,23 @@ static int parse_reply_info(struct ceph_mds_session *s, struct ceph_msg *msg,
static void destroy_reply_info(struct ceph_mds_reply_info_parsed *info)
  {
+	int i = 0;
+
  	kfree(info->diri.fscrypt_auth);
  	kfree(info->diri.fscrypt_file);
  	kfree(info->targeti.fscrypt_auth);
  	kfree(info->targeti.fscrypt_file);
  	if (!info->dir_entries)
  		return;
+
+	for (i = 0; i < info->dir_nr; i++) {
+		struct ceph_mds_reply_dir_entry *rde = info->dir_entries + i;
+		if (rde->inode.fscrypt_auth_len)
+			kfree(rde->inode.fscrypt_auth);
+	}
+	
  	free_pages((unsigned long)info->dir_entries, get_order(info->dir_buf_size));
  }

Yeah, this looks nice.


Cheers,
--
Luís
