Re: MDS auth caps for cephfs

On Wed, May 27, 2015 at 2:44 PM, Sage Weil <sweil@xxxxxxxxxx> wrote:
> On Tue, 26 May 2015, Gregory Farnum wrote:
>> >> >> Basically I'm still stuck on how any of this lets us lock a user into
>> >> >> a subtree while letting them do what they want within it. I'm not sure
>> >> >> how/if NFS solves that problem...
>> >> >
>> >> > That's easy:
>> >> >
>> >> >  # lock client into a dir
>> >> >  allow rw path /home/user
>> >> >
>> >> > Or, for a shared model:
>> >> >
>> >> >  # allow access to a project dir, as project or user gid
>> >> >  allow rw path /share/project uid 123 gids 123,1000
>> >>
>> >> I think I'm forgetting how the parsing works. Is that all one specific
>> >> allow stanza that is evaluated as a unit? Ie, to match it an operation
>> >> must fall within /share/project and be sent by a user with either UID
>> >> 123 or GID{123,1000}? You're right, that works.
>> >
>> > Right.  Each 'allow' stanza is a set of conditions.  If they are true,
>> > then we allow.
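[A minimal sketch of evaluating one 'allow' stanza as a unit, per the semantics above: all present conditions must hold, with the uid/gid list acting as alternatives within the identity condition. Type and function names are illustrative, not the actual MDS code.]

```cpp
#include <algorithm>
#include <string>
#include <vector>
#include <sys/types.h>

// Hypothetical model of one 'allow' stanza: every condition present
// in the stanza must hold for it to grant access.
struct AllowStanza {
  bool read = false;
  bool write = false;
  std::string path;         // empty => no path restriction
  std::vector<uid_t> uids;  // empty (with empty gids) => any identity
  std::vector<gid_t> gids;
};

// Evaluate the stanza as a unit: the request must fall under the path
// prefix AND come from a matching uid or a member of a matching gid.
bool stanza_allows(const AllowStanza& s, bool want_write,
                   const std::string& req_path, uid_t uid,
                   const std::vector<gid_t>& groups) {
  if (want_write ? !s.write : !s.read)
    return false;
  if (!s.path.empty() &&
      req_path.compare(0, s.path.size(), s.path) != 0)
    return false;
  if (s.uids.empty() && s.gids.empty())
    return true;  // no identity restriction in this stanza
  if (std::find(s.uids.begin(), s.uids.end(), uid) != s.uids.end())
    return true;
  for (gid_t g : groups)
    if (std::find(s.gids.begin(), s.gids.end(), g) != s.gids.end())
      return true;
  return false;
}
```

[So "allow rw path /share/project uid 123 gids 123,1000" admits uid 123, or any member of gid 123 or 1000, but only under /share/project.]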
>>
>> Okay, I think we're on the same page here.
>
> Yay!
>
> Now, let's see if I can throw us off again...
>
> I was just talking to Simo about the longer-term kerberos auth goals to
> make sure we don't do something stupid here that we regret later.  His
> feedback boils down to:
>
>  1) Don't bother with root squash since it doesn't buy you much, and
>  2) Never let the client construct the credential--do it on the server.
>
> I'm okay with skipping squash_root (although it's simple enough it might
> be worthwhile anyway)

Oh, I like skipping it, given the syntax and usability problems we went over. ;)

> but #2 is a bit different than what I was thinking.
> Specifically, this is about tagging requests with the uid + gid list.  If
> you let the client provide the group membership you lose most of the
> security--this is what NFS did and it sucked.  (There were other problems
> too, like a limit of 16 gids, and/or problems when a Windows admin who is
> in 4000 groups comes along.)

I'm not sure I understand this bit. I thought we were planning to have
gids in the cephx caps, and then have the client construct the list it
thinks is appropriate for each given request?
Obviously that trusts the client *some*, but it sandboxes them in, and
I'm not sure that trust is a meaningful extension as long as we make
sure the UID and GID sets go together from the cephx caps.

>
> The idea we ended up on was to have a plugin interface on the MDS to do
> the credential -> uid + gid list mapping.  For simplicity, our initial
> "credential id" can just be a uid.  And the plugin interface would be
> something like
>
>  int resolve_credential(bufferlist cred, uid_t *uid, vector<gid_t> *gids);
>
> with plugins that do various trivial things, like
>
>  - cred = uid, assume we are in one group with gid == uid
>  - cred = uid, resolve groups from local machine (where ceph-mds
> is running)
>  - cred = uid, resolve groups from explicitly named passwd/group files
>
> and later we'd add plugins to query LDAP, parse a kerberos
> credential, or parse the MS-PAC thing from kerberos.
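[Two of the trivial plugins above could look roughly like this. Function names and the raw-buffer credential encoding are my own illustration, not the proposed interface verbatim; the second variant leans on getpwuid(3)/getgrouplist(3) on the machine where ceph-mds runs.]

```cpp
#include <cstring>
#include <vector>
#include <grp.h>
#include <pwd.h>
#include <sys/types.h>

// cred = uid, assume we are in exactly one group with gid == uid.
int resolve_credential_trivial(const void* cred, size_t cred_len,
                               uid_t* uid, std::vector<gid_t>* gids) {
  if (cred_len < sizeof(uid_t))
    return -1;  // malformed credential
  std::memcpy(uid, cred, sizeof(uid_t));
  gids->assign(1, static_cast<gid_t>(*uid));
  return 0;
}

// cred = uid, resolve groups from the local machine (where ceph-mds
// is running) via getpwuid(3) + getgrouplist(3).
int resolve_credential_local(const void* cred, size_t cred_len,
                             uid_t* uid, std::vector<gid_t>* gids) {
  if (cred_len < sizeof(uid_t))
    return -1;
  std::memcpy(uid, cred, sizeof(uid_t));
  struct passwd* pw = getpwuid(*uid);
  if (!pw)
    return -1;  // unknown uid on this machine
  int ngroups = 16;
  gids->resize(ngroups);
  if (getgrouplist(pw->pw_name, pw->pw_gid,
                   gids->data(), &ngroups) == -1) {
    gids->resize(ngroups);  // buffer too small; ngroups now holds the count
    getgrouplist(pw->pw_name, pw->pw_gid, gids->data(), &ngroups);
  }
  gids->resize(ngroups);
  return 0;
}
```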
>
> The target environments would be:
>
> 1) trusted, no auth, keep doing what we do now (trust the client and check
> nothing at the mds)
>
>   allow any
>
> 2) semi-trusted client.  Use cap like
>
>   allow rw
>
> but check client requests at MDS by resolving credentials and verifying
> unix permissions/ACLs.  (This will use the above call-out to do the uid ->
> gid translation.)
>
> 3) per-client trust.  Use caps like
>
>   allow rw uid 123 gids 123,1000
>
> so that a given host is locked as a single user (or maybe a small list of
> users).  Or,
>
>   allow rw path /foo uid 123 gids 123
>
> etc.
>
> 4) untrusted client.  Use kerberos.  Use caps like
>
>   allow rw kerberos_domain=FOO.COM
>
> and do all the fancypants stuff to get per-user tickets from clients,
> resolve them to groups, and enforce things on the server.  This one is
> still hand-wavey since we haven't defined the protocol etc.
>
> I think we can get 1-3 without too much trouble!  The main question for me
> right now is how we define the credential we tag requests and cap
> writeback with.  Maybe something simple like
>
> struct ceph_cred_handle {
>         enum { NONE, UID, OTHER } type;
>         uint64_t id;
> };
>
> For now we just stuff the uid into id.  For kerberos, we'll put some
> cookie in there that came from a previous exchange where we passed the
> kerberos ticket to the MDS and got an id.  (The ticket may be big--we
> don't want to attach it to each request.)
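[Concretely, the handle sketched above might be used like this — the uid is stuffed straight into 'id' for now, with OTHER reserved for an opaque cookie from a prior kerberos exchange. The helper name is hypothetical.]

```cpp
#include <cstdint>
#include <sys/types.h>

// The handle from the mail: for now a uid goes straight into 'id';
// a kerberos-derived cookie would later use type OTHER.
struct ceph_cred_handle {
  enum Type { NONE, UID, OTHER } type = NONE;
  uint64_t id = 0;
};

// Hypothetical helper: tag a request with a uid-based credential.
inline ceph_cred_handle cred_from_uid(uid_t uid) {
  ceph_cred_handle h;
  h.type = ceph_cred_handle::UID;
  h.id = uid;  // just stuff the uid into id
  return h;
}
```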

Okay, so we want to do a lot more than granting uid and gid
permissions in cephx? These look depressingly integration-heavy,
but not terribly complicated internally. I'd kind of like the
interface to not imply we're doing external callouts on every MDS
op, though!
-Greg
--