Re: ceph-mds infrastructure for fscrypt

On Thu, 2021-04-29 at 16:46 -0700, Patrick Donnelly wrote:
> Hi Jeff,
> 
> On Thu, Apr 22, 2021 at 11:19 AM Jeff Layton <jlayton@xxxxxxxxxx> wrote:
> > At this point, I'm thinking it might be best to unify all of the
> > per-inode info into a single field that the MDS would treat as opaque.
> > Note that the alternate_names feature would remain more or less
> > untouched since it's associated more with dentries than inodes.
> > 
> > The initial version of this field would look something like this:
> > 
> > struct ceph_fscrypt_context {
> >         u8                              version;        // == 1
> >         struct fscrypt_context_v2       fscrypt_ctx;    // 40 bytes
> >         __le32                          blocksize;      // 4k for now
> >         __le64                          size;           // "real" i_size
> > };
> > 
> > The MDS would send this along with any size updates (InodeStat, and
> > MClientCaps replies). The client would need to send this in cap
> > flushes/updates, and we'd also need to extend the SETATTR op too, so the
> > client can update this field in truncates (at least).
> > 
> > I don't look forward to having to plumb this into all of the different
> > client ops that can create inodes though. What I'm thinking we might
> > want to do is expose this field as the "ceph.fscrypt" vxattr.
> 
> I think the process for adding the fscrypt bits to the MClientRequest
> will be the same as adding alternate_name? In the
> ceph_mds_request_head payload. I don't like the idea of stuffing this
> data in the xattr map.
> 

That does sound considerably less hacky. I'll look into doing that.
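For concreteness, here's a rough userspace sketch of how the proposed blob might
get packed for the wire. The helper names and the flat little-endian layout are
just illustrative assumptions on my part -- the real format would be whatever
the MDS-side encoding ends up being:

```c
/* Hypothetical sketch only: packs the proposed opaque fscrypt blob as a
 * flat little-endian stream. Real encoding TBD on the MDS side. */
#include <stdint.h>
#include <string.h>
#include <stddef.h>

#define FSCRYPT_CTX_V2_SIZE 40   /* sizeof(struct fscrypt_context_v2), per above */

struct ceph_fscrypt_context {
	uint8_t  version;                          /* == 1 */
	uint8_t  fscrypt_ctx[FSCRYPT_CTX_V2_SIZE]; /* opaque to the MDS */
	uint32_t blocksize;                        /* 4k for now */
	uint64_t size;                             /* "real" i_size */
};

/* Encode field by field (no struct padding on the wire). Returns bytes
 * written: 1 + 40 + 4 + 8 = 53. */
static size_t fscrypt_ctx_encode(const struct ceph_fscrypt_context *c,
				 uint8_t *buf)
{
	uint8_t *p = buf;
	int i;

	*p++ = c->version;
	memcpy(p, c->fscrypt_ctx, FSCRYPT_CTX_V2_SIZE);
	p += FSCRYPT_CTX_V2_SIZE;
	for (i = 0; i < 4; i++)
		*p++ = (c->blocksize >> (8 * i)) & 0xff;
	for (i = 0; i < 8; i++)
		*p++ = (c->size >> (8 * i)) & 0xff;
	return p - buf;
}

/* Inverse of the above. Returns bytes consumed. */
static size_t fscrypt_ctx_decode(struct ceph_fscrypt_context *c,
				 const uint8_t *buf)
{
	const uint8_t *p = buf;
	int i;

	c->version = *p++;
	memcpy(c->fscrypt_ctx, p, FSCRYPT_CTX_V2_SIZE);
	p += FSCRYPT_CTX_V2_SIZE;
	c->blocksize = 0;
	for (i = 0; i < 4; i++)
		c->blocksize |= (uint32_t)*p++ << (8 * i);
	c->size = 0;
	for (i = 0; i < 8; i++)
		c->size |= (uint64_t)*p++ << (8 * i);
	return p - buf;
}
```

The nice property of a flat blob like this is that the MDS can store and
forward it without understanding anything past the version byte.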

> > The client can stuff that into the xattr blob when creating a new inode,
> > and the MDS can scrape it out of that and move the data into the correct
> > field in the inode. A setxattr on this field would update the new field
> > too. It's an ugly interface, but shouldn't be too bad to handle and we
> > have some precedent for this sort of thing.
> > 
> > The rules for handling the new field in the client would be a bit weird
> > though. We'll need to allow reading the fscrypt_ctx part without
> > any caps (since that should be static once it's set), but the size
> > handling needs to be under the same caps as the traditional size field
> > (Is that Fsx? The rules for this are never quite clear to me.)
> > 
> > Would it be better to have two different fields here -- fscrypt_auth and
> > fscrypt_file? Or maybe, fscrypt_static/_dynamic? We don't necessarily
> > need to keep all of this info together, but it seemed neater that way.
> 
> I'm not seeing a reason to split the struct.
> 

What caps should this live under? We have different requirements for
different parts of the struct.

1) fscrypt context: needs to be always available, especially when an
inode is initially instantiated, though it should almost always be
static once it's set. The exception is that an empty directory can grow
a new context when it's first encrypted, and we'll want other clients to
pick up on this change when it occurs.

2) "real" size: needs to be under Fwx, I think (though I need to look
more closely at the truncation path to be sure).

...and that's not even considering what rules we might want in the
future for other info we stuff into here. Given that the MDS needs to
treat this as opaque, what locks/caps should cover this new field?
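If we did end up splitting along those cap boundaries, it might look something
like this. Purely illustrative -- the struct names just reuse the _auth/_file
idea floated above, and the rounding helper assumes fscrypt's usual behavior of
padding the last block of contents out to the block size (which is the reason
the "real" i_size has to travel separately at all):

```c
/* Hypothetical split of the unified context along cap boundaries. */
#include <stdint.h>

/* Quasi-static part: readable without caps once set. The only transition
 * is empty -> populated, when an empty directory is first encrypted. */
struct ceph_fscrypt_auth {
	uint8_t version;          /* == 1 */
	uint8_t fscrypt_ctx[40];  /* struct fscrypt_context_v2 */
};

/* Size part: covered by whatever caps protect the traditional i_size. */
struct ceph_fscrypt_file {
	uint32_t blocksize;       /* 4k for now */
	uint64_t size;            /* "real" i_size */
};

/* On-disk (ciphertext) size implied by a given "real" size: the last
 * partial block is padded out to a full encryption block. */
static uint64_t fscrypt_ciphertext_size(uint64_t real_size,
					uint32_t blocksize)
{
	if (!blocksize)
		return real_size;
	return (real_size + blocksize - 1) / blocksize * blocksize;
}
```

That asymmetry (one half effectively write-once, the other churning with every
size change) is maybe the strongest argument for not keeping them under one
lock.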

> > Thoughts? Opinions? Is this a horrible idea? What would be better?
> > 
> > Thanks,
> > --
> > Jeff Layton <jlayton@xxxxxxxxxx>
> > 
> > [1]: latest draft was posted here:
> > https://lore.kernel.org/ceph-devel/53d5bebb28c1e0cd354a336a56bf103d5e3a6344.camel@xxxxxxxxxx/T/#t
> > [2]: https://github.com/ceph/ceph/pull/37297
> > [3]:
> > https://github.com/ceph/ceph/commit/7fe1c57846a42443f0258fd877d7166f33fd596f
> > [4]:
> > https://lore.kernel.org/ceph-devel/53d5bebb28c1e0cd354a336a56bf103d5e3a6344.camel@xxxxxxxxxx/T/#m0f7bbed6280623d761b8b4e70671ed568535d7fa

-- 
Jeff Layton <jlayton@xxxxxxxxxx>
_______________________________________________
Dev mailing list -- dev@xxxxxxx
To unsubscribe send an email to dev-leave@xxxxxxx


