Re: MDS auth caps for cephfs

On Wed, May 27, 2015 at 3:21 PM, Sage Weil <sweil@xxxxxxxxxx> wrote:
> On Wed, 27 May 2015, Gregory Farnum wrote:
>> > I was just talking to Simo about the longer-term kerberos auth goals to
>> > make sure we don't do something stupid here that we regret later.  His
>> > feedback boils down to:
>> >
>> >  1) Don't bother with root squash since it doesn't buy you much, and
>> >  2) Never let the client construct the credential--do it on the server.
>> >
>> > I'm okay with skipping squash_root (although it's simple enough it might
>> > be worthwhile anyway)
>>
>> Oh, I like skipping it, given the syntax and usability problems we went over. ;)
>>
>> > but #2 is a bit different than what I was thinking.
>> > Specifically, this is about tagging requests with the uid + gid list.  If
>> > you let the client provide the group membership you lose most of the
>> > security--this is what NFS did and it sucked.  (There were other problems
>> too, like a limit of 16 gids, and/or problems when a Windows admin in 4000
>> groups comes along.)
>>
>> I'm not sure I understand this bit. I thought we were planning to have
>> gids in the cephx caps, and then have the client construct the list it
>> thinks is appropriate for each given request?
>> Obviously that trusts the client *some*, but it sandboxes them in, and
>> I'm not sure that trust is a meaningful extension of privilege as long
>> as we make sure the UID and GID sets go together in the cephx caps.
>
> We went around in circles about this for a while, but in the end I think
> we agreed there is minimal value in having the client construct anything
> (the gid list in this case), and skipping that avoids taking any step
> down what is ultimately a dead-end road.  For example, caps like
>
>   allow rw gid 2000
>
> are useless since the client can set gid=2000 but then make the request
> uid anything it wants (namely, the file owner).  Cutting the client out of
> the picture also avoids the many-gid issue.

I don't think I understand the threat model we're worried about here.
(Granted, a cap that sets gid but not uid sounds like a bad idea to
me.) But if the cephx caps include the GID, then a client can only
present weaker credentials than it's permitted, which is often exactly
what you want. For instance, what if each tenant in a multi-tenant
system has a single cephx key, but has both admin and non-admin users
within its local context?
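
To make my mental model concrete, the check I'm imagining on the MDS
side is just a subset test against the cap. A minimal sketch (all
names here are made up for illustration, not a proposed API):

  #include <algorithm>
  #include <set>
  #include <sys/types.h>

  struct CapGrant {
    uid_t uid;             // uid this cap grants
    std::set<gid_t> gids;  // gids this cap grants
  };

  // The client may present any subset of the granted gids, but never
  // a uid or gid outside the cap -- "weaker only".
  bool request_allowed(const CapGrant &grant, uid_t req_uid,
                       const std::set<gid_t> &req_gids)
  {
    if (req_uid != grant.uid)
      return false;
    return std::includes(grant.gids.begin(), grant.gids.end(),
                         req_gids.begin(), req_gids.end());
  }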

>  The trade-off is that if you
> want stronger auth you need to teach the MDS how to do those mappings.
>
> We need to make sure we can make this sane in a multi-namespace
> environment, e.g., where we have different cloud tenants in different
> paths.  Would we want to specify different uid->gid mappings for those?
> Maybe we actually want a cap like
>
>  allow rw path=/foo uidgidns=foo
>
> or something so that another tenant could have
>
>  allow rw path=/foo uidgidns=bar
>
> Or, we can just say that you get either
>
>  - a global uid->gid mapping, server-side enforcement, and allow based on
> uid;
>  - same as above, but also with a path restriction; or
>  - path restriction, and no server-side uid/gid permission/acl checks

Yes, this multi-namespace environment was what I was touching on in
some of my more confusing asides earlier. I think we need to survey
more operators about what they'd want here before making any
decisions, because I just don't understand the tradeoffs from their
perspective. (Is that list of 3 choices going to be a problem for
anybody? It's certainly the *easiest* to implement, and UID namespaces
within a single hierarchy sound like a bit of a nightmare both to
implement and administer...at that point maybe we're better off with
just multiple separate hierarchies.)
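
(For concreteness, if I'm reading those three choices right, they'd be
caps roughly like

  allow rw uid 123              (global uid->gid mapping, MDS-enforced)
  allow rw path /foo uid 123    (same, plus a path restriction)
  allow rw path /foo            (path restriction, no uid/gid checks)

with the gid list always coming from the server-side mapping in the
first two cases.)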

>
>> > The idea we ended up on was to have a plugin interface on the MDS to do
>> > the credential -> uid + gid list mapping.  For simplicity, our initial
>> > "credential id" can just be a uid.  And the plugin interface would be
>> > something like
>> >
>> >  int resolve_credential(bufferlist cred, uid_t *uid, vector<gid_t> *gidls);
>> >
>> > with plugins that do various trivial things, like
>> >
>> >  - cred = uid, assume we are in one group with gid == uid
>> >  - cred = uid, resolve groups from local machine (where ceph-mds
>> > is running)
>> >  - cred = uid, resolve groups from explicitly named passwd/group files
>> >
>> > and later we'd add plugins to query LDAP, parse a kerberos
>> > credential, or parse the MS-PAC thing from kerberos.
>> >
>> > The target environments would be:
>> >
>> > 1) trusted, no auth, keep doing what we do now (trust the client and check
>> > nothing at the mds)
>> >
>> >   allow any
>> >
>> > 2) semi-trusted client.  Use cap like
>> >
>> >   allow rw
>> >
>> > but check client requests at MDS by resolving credentials and verifying
>> > unix permissions/ACLs.  (This will use the above call-out to do the uid ->
>> > gid translation.)
>> >
>> > 3) per-client trust.  Use caps like
>> >
>> >   allow rw uid 123 gids 123,1000
>> >
>> > so that a given host is locked as a single user (or maybe a small list of
>> > users).  Or,
>> >
>> >   allow rw path /foo uid 123 gids 123
>> >
>> > etc.
>> >
>> > 4) untrusted client.  Use kerberos.  Use caps like
>> >
>> >   allow rw kerberos_domain=FOO.COM
>> >
>> > and do all the fancypants stuff to get per-user tickets from clients,
>> > resolve them to groups, and enforce things on the server.  This one is
>> > still hand-wavey since we haven't defined the protocol etc.
>> >
>> > I think we can get 1-3 without too much trouble!  The main question for me
>> > right now is how we define the credential we tag requests and cap
>> > writeback with.  Maybe something simple like
>> >
>> > struct ceph_cred_handle {
>> >         enum { NONE, UID, OTHER } type;
>> >         uint64_t id;
>> > };
>> >
>> > For now we just stuff the uid into id.  For kerberos, we'll put some
>> > cookie in there that came from a previous exchange where we passed the
>> > kerberos ticket to the MDS and got an id.  (The ticket may be big--we
>> > don't want to attach it to each request.)
>>
>> Okay, so we want to do a lot more than in-cephx uid and gid
>> permissions granting? These look depressingly integration-intensive,
>> but not terribly complicated internally. I'd kind of like the
>> interface not to imply we're doing external callouts on every MDS
>> op, though!
>
> We'd probably need to allow it to be async (return EAGAIN) or something.
> Some cases will hit a cache or be trivial and non-blocking, but others
> will need to do an upcall to some slow network service.  Maybe
>
>   int resolve_credential(bufferlist cred, uid_t *uid, vector<gid_t>
>      *gidls, Context *onfinish);
>
> where r == 0 means we did it, and r == -EAGAIN means we will call onfinish
> when the result is ready.  Or some similar construct that lets us avoid a
> spurious Context alloc+free in the fast path.

Mmm. "slow network service" scares me. I presume you're thinking here
that this is a per-session request, not a per-operation one? If we're
going to include external security systems we probably need to let
them get a say on every request but it very much needs to be local
data only for those.
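
If we do go that way, something like the sketch below is what I'd hope
for: resolve once per session (or per cache miss) via the upcall, and
keep the per-operation path purely local. The names here are invented
for illustration:

  #include <cerrno>
  #include <cstdint>
  #include <map>
  #include <vector>
  #include <sys/types.h>

  struct Context { virtual void finish(int r) = 0; virtual ~Context() {} };

  struct CredEntry {
    uid_t uid;
    std::vector<gid_t> gids;
  };

  class CredResolver {
    std::map<uint64_t, CredEntry> cache;  // cred id -> resolved identity
  public:
    // r == 0: *uid/*gids filled in synchronously (Context untouched);
    // r == -EAGAIN: onfinish->finish() fires when the slow path answers.
    int resolve_credential(uint64_t cred_id, uid_t *uid,
                           std::vector<gid_t> *gids, Context *onfinish) {
      auto it = cache.find(cred_id);
      if (it != cache.end()) {            // fast path: local cache hit
        *uid = it->second.uid;
        *gids = it->second.gids;
        return 0;
      }
      start_upcall(cred_id, onfinish);    // slow path: LDAP/kerberos/etc.
      return -EAGAIN;                     // caller requeues the op later
    }
  private:
    void start_upcall(uint64_t cred_id, Context *onfinish) {
      // Would queue an async query to the external mapping service and
      // call onfinish->finish(0) once the cache has been populated.
      (void)cred_id; (void)onfinish;
    }
  };

That way the only operations that ever block on the network are the
first ones in a session (or after a cache expiry), which I could live
with.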
-Greg