Re: MDS auth caps for cephfs

On Thu, 28 May 2015, Gregory Farnum wrote:
> On Thu, May 28, 2015 at 9:20 AM, Robert LeBlanc <robert@xxxxxxxxxxxxx> wrote:
> >
> > I've been trying to follow this and I've been lost many times, but I'd
> > like to put in my $0.02.  In my mind any multi-tenant system that
> > relies on the client to specify UID/GID as authoritative is
> > fundamentally flawed. The server needs to be authoritative with access
> > or I would not trust it in a multi-tenant environment.
> >
> > My take is to have the User key (generated by the Ceph admin) specify the
> > CephFS directory|directories the key can access and the rwx
> > permissions for the directory|directories and then leave it up to the
> > tenant to handle the UID/GID allocation and the synchronization
> > between their hosts.
> 
> Right, this is basically what we're planning. The sticky bits are about
> 1) dealing with clients that have access to multiple UIDs/GIDs
> (because different end users are on the same host, for instance). :)
> 2) dealing with "public cloud"-like scenarios, where you have a bunch
> of tenants who are all root on their own machines and thus control
> their UID space. (Right now we can't put multiple CephFS instances in
> a single RADOS cluster, so the only obvious way to support this is by
> giving each client their own subspace within the unified hierarchy.)

Yep!

> > Some tenants may want just local UID/GID
> > management, others may want LDAP, Kerberos, etc. I believe Ceph should
> > only be worried about "share" permissions and leave "file" permissions
> > to the tenant. Ceph just needs the ability to store UID/GID and POSIX
> > ACLs.
> 
> Well that doesn't quite work -- it's entirely possible you want to
> share read-only files with a bunch of people that shouldn't be allowed
> to write them; that lack of write ability needs to be enforced by Ceph
> at the server layer!

I think with what we're proposing you can still do this.  You'd use ceph 
capabilities that lock mounts into subtrees and do nothing else.  Ceph 
can continue to store uid/gids/acls but not interpret them.
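
For example (a rough sketch only -- the client names, paths, and pool are 
made up, and the path= restriction in the mds cap is the subtree-locking 
piece we've been discussing), two keys scoped to the same subtree, one 
writable and one read-only, could be generated like this:

    # Sketch: two keys locked to the same subtree.  The MDS/OSDs enforce
    # the r vs rw distinction server-side; uid/gid/ACLs stored in the tree
    # are passed through but never interpreted.
    import shlex

    keys = {
        "client.writer": ("allow rw path=/projects/shared", "allow rw pool=cephfs_data"),
        "client.reader": ("allow r path=/projects/shared",  "allow r pool=cephfs_data"),
    }

    for name, (mds_cap, osd_cap) in keys.items():
        cmd = ["ceph", "auth", "get-or-create", name,
               "mon", "allow r", "mds", mds_cap, "osd", osd_cap]
        # print the command; paste it into a shell on a node with an admin keyring
        print(" ".join(shlex.quote(c) for c in cmd))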

The extra complexity we're talking about would kick in if you *do* want to 
share the same subtrees across users and want CephFS to enforce unix 
permissions server-side.  That's important for some users.  And it's great 
to hear that it's not important for lots of others!
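
If we did go down that road, I'd imagine the cap growing an extra clause 
along these lines (again just a sketch -- the uid=/gids= spelling and the 
names/values here are illustrative, not settled syntax):

    # Hypothetical stricter cap: same subtree lock, plus the MDS checks
    # which uid/gids the client is allowed to act as.  All names, ids,
    # and the uid=/gids= syntax itself are illustrative only.
    import shlex

    cmd = ["ceph", "auth", "get-or-create", "client.alice",
           "mon", "allow r",
           "mds", "allow rw path=/shared uid=1000 gids=1000,1001",
           "osd", "allow rw pool=cephfs_data"]
    print(" ".join(shlex.quote(c) for c in cmd))  # paste into a shell to create the key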

> > The MDS could combine a tenant ID and a UID/GID to store unique
> > UID/GIDs on the back end and just strip off the tenant ID when
> > presented to the client so there are no collisions of UID/GIDs between
> > tenants in the MDS.
> 
> Hmm, that is another thought...

Unless you ask Ceph to enforce the unix permissions server side, the 
uid/gid are stored but not interpreted.  I don't think the tenant ID is 
needed since there is no impact if the same uids are used in different 
subtrees.  It's just up to the admin to divvy up non-overlapping subtrees 
to the tenants...
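
Handing out those subtrees can be as simple as a loop over tenants, each 
getting its own disjoint path (sketch below; the tenant names, the 
/tenants prefix, and the pool are all made up):

    # One disjoint subtree per tenant.  uid collisions between tenants
    # don't matter because each key can only ever reach its own subtree.
    import shlex

    tenants = ["acme", "globex", "initech"]
    for t in tenants:
        cmd = ["ceph", "auth", "get-or-create", "client.%s" % t,
               "mon", "allow r",
               "mds", "allow rw path=/tenants/%s" % t,
               "osd", "allow rw pool=cephfs_data"]
        print(" ".join(shlex.quote(c) for c in cmd))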

sage