Re: cephfs root_squash, multitenancy

On Fri, Feb 13, 2015 at 3:35 PM, Sage Weil <sweil@xxxxxxxxxx> wrote:
> On Fri, 13 Feb 2015, Gregory Farnum wrote:
>> On Fri, Feb 13, 2015 at 5:05 AM, Sage Weil <sweil@xxxxxxxxxx> wrote:
>> > Got this from JJ:
>> >
>> >> The SA expanded on this by stating that there are basically three main
>> >> scenarios here:
>> >>
>> >> 1) We trust the UID/GID in a controlled environment. In which case we
>> >> can safely rely on the POSIX permissions. As long as root_squash is
>> >> available this would be fine.
>> >
>> > I think adding a root_squash option to client/Client.cc and the ceph.ko
>> > should be pretty easy...
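For concreteness, a client-side root_squash option might amount to something like the following sketch before permission checks are applied. All names here are illustrative, and the 65534 squash target is an assumption borrowed from the NFS nobody/nogroup convention:

```cpp
#include <sys/types.h>

// Hypothetical squash target, mirroring the NFS nobody/nogroup convention.
static const uid_t SQUASH_UID = 65534;
static const gid_t SQUASH_GID = 65534;

struct Cred { uid_t uid; gid_t gid; };

// Apply root_squash before permission checks: a request arriving with
// uid/gid 0 is remapped to an unprivileged identity, so POSIX permission
// bits are evaluated as if the caller were not root.
Cred squash_root(Cred c, bool root_squash) {
  if (root_squash) {
    if (c.uid == 0) c.uid = SQUASH_UID;
    if (c.gid == 0) c.gid = SQUASH_GID;
  }
  return c;
}
```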
>> >
>> >> 2) Multi-tenant systems. In these cases being able to create keyrings
>> >> which limit access to specified directories would be ideal.
>> >
>> > This is mainly enforcing uid/gid and mount path in MDSAuthCap, and not
>> > unlike what we'll need for OpenStack Manila.  I think it's something like
>> >
>> >  1- establish mount root inode ref when session is opened/authenticated
>> >  2- verify in reply path that any inode we reference is beneath that point
>> >  3- special case inodes in stray directory, hopefully in some secure-ish
>> > way based on where they lived previously.  (not sure how this'll work...)
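Step 2 of the above boils down to a subtree containment test on the path granted in the cap. A minimal sketch, assuming a cap root like "/tenants/a" and noting the component-boundary subtlety (the function name and shape are illustrative, not the actual MDSAuthCaps interface):

```cpp
#include <string>

// Sketch of step 2: verify that a resolved path lies beneath the mount
// root granted by the cap. Boundary-aware so that "/tenants/ab" does NOT
// match a cap rooted at "/tenants/a".
bool path_is_within(const std::string &cap_root, const std::string &path) {
  if (cap_root.empty() || cap_root == "/")
    return true;                              // unrestricted cap
  if (path.compare(0, cap_root.size(), cap_root) != 0)
    return false;                             // not a prefix at all
  return path.size() == cap_root.size() ||    // exact match of the root
         path[cap_root.size()] == '/';        // ends on a path component
}
```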
>>
>> This concerns me:
>> 1) We have nothing to control access to raw objects, so there is some
>> security against accidents here, but not against malicious users.
>
> Yeah.  There's no complete solution without both pieces, but one of them
> has to come first.
>
>> 2) Path verification is going to be tricky to handle with symlinks and
>> things. Moving it into the reply path as you discuss makes one set of
>> decisions on that (I'm not sure how flexibly?)
>
> Symlinks at least are easy: they are resolved on the client side in the
> VFS (above ceph-fuse/libcephfs or ceph.ko).  As far as the file system is
> concerned symlinks are just a special type of file with a max size of
> PATH_MAX.

Whoops, right. I meant hard links. :)

>
> Remote links will be the tricky ones.  It would be nice if this was a
> simple check in the rdlock_* etc helpers in Server.cc but for hard links
> it would be one step past that.  On the other hand, it may make sense
> that if there is a hard link you get to see the file... in which case the
> hard part will actually be making sure that we allow it if the dentry is
> inside the allowed subtree.
>
> I said reply path before but now up-front checks in Server.cc seem more
> sensible.  :/ I need to spend some more time reading the code.

Up-front checks are how we've more often proposed this, yes. I'd
just like a mechanism that isn't specialized for every op type; for
instance, if there's a hard link you should probably be able to see
the file. But then we need code in the handle_client_link function to
make sure clients don't create links to files they can't already
access. (The link target is just a filepath encoded by the client.)

Hrm, actually, maybe that just happens automatically if we put the
permission-checking code in Server::rdlock_path_pin_ref and/or
MDCache::path_traverse. That sounds familiar...
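To illustrate the hard-link concern: since handle_client_link receives the target as a client-supplied filepath, both the new dentry and the target need to pass the subtree check, or a tenant could link in files it cannot otherwise reach. A sketch under those assumptions (all names here are illustrative, not the real Server/MDCache interfaces; if the check lives in path_traverse, the target check would fall out automatically):

```cpp
#include <string>

// Illustrative subtree test (see the path-restriction discussion above):
// true iff p is strictly beneath root, ending on a component boundary.
static bool within(const std::string &root, const std::string &p) {
  return p.size() > root.size() &&
         p.compare(0, root.size(), root) == 0 &&
         p[root.size()] == '/';
}

// Hypothetical link-time check: a client may create a hard link only if
// both the new dentry and the (client-encoded) target path fall inside
// the subtree its cap grants.
bool may_link(const std::string &cap_root,
              const std::string &new_dentry_path,
              const std::string &target_path) {
  return within(cap_root, new_dentry_path) &&
         within(cap_root, target_path);
}
```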

I think I've discussed this before in the context of external
contributors or a blueprint:
https://wiki.ceph.com/Planning/Sideboard/Client_Security_for_CephFS
and https://wiki.ceph.com/Planning/CDS/Dumpling/Etherpad_Snapshots/1G%3A_Client_Security_for_CephFS
might be helpful to come up to speed on this topic, but I think
there's a more extensive discussion somewhere from early last summer.
:/

Up-front actually doesn't sound *too* hard to me, but that still
leaves the testing, which honestly scares me more.
-Greg
--
To unsubscribe from this list: send the line "unsubscribe ceph-devel" in
the body of a message to majordomo@xxxxxxxxxxxxxxx
More majordomo info at  http://vger.kernel.org/majordomo-info.html



