Re: Cephfs: one ceph account per directory?

On Thu, Jun 4, 2015 at 7:25 AM, François Lafont <flafdivers@xxxxxxx> wrote:
> Hi,
>
> A Hammer cluster can provide only one CephFS, and my problem is about
> security. Currently, if I want to share one CephFS directory with 2 nodes
> foo-1 and foo-2 and another directory with 2 other nodes bar-1 and bar-2,
> I just mount a dedicated directory on foo-1/foo-2 and another dedicated
> directory on bar-1/bar-2. For instance, I put this line in the /etc/fstab
> of foo-1 and foo-2:
>
>     mon-1,mon-2,mon-3:/foo   /mnt   ceph   noatime,name=cephfs-account,secretfile=/etc/ceph/secret
>
> And I put this line in the /etc/fstab of bar-1 and bar-2:
>
>     mon-1,mon-2,mon-3:/bar   /mnt   ceph   noatime,name=cephfs-account,secretfile=/etc/ceph/secret
>
> But as you can see, I use the same ceph account on foo-{1,2} and on
> bar-{1,2}. So, for instance, if foo-1 is compromised because a bad person
> is root on this server, the bad person can remove the content of /foo in
> CephFS (ok, that's expected), but the bad person can also change the line
> in fstab to:
>
>     mon-1,mon-2,mon-3:/   /mnt   ceph   noatime,name=cephfs-account,secretfile=/etc/ceph/secret
>
> and then remove the content of /bar/ in CephFS as well (which is less
> acceptable ;)).
>
> 1. Can you confirm that it is currently impossible to restrict a ceph
> account's read and write access to a specific directory of a CephFS?

It's sadly impossible to restrict access to the filesystem hierarchy at
this time, yes. By making use of file layouts and assigning each user
their own pool, you can restrict access to the actual file data.
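
To make that concrete, here is a rough sketch of the layout/pool
approach. The pool name, client name, and mount point below are only
examples, and the exact admin commands can differ between releases, so
treat it as an outline rather than a recipe:

    # Create a dedicated data pool for the foo tenant and register it
    # as a CephFS data pool (on Hammer: "ceph mds add_data_pool").
    ceph osd pool create foo-data 64
    ceph mds add_data_pool foo-data

    # Point the file layout of /foo at that pool, so the data of new
    # files created under /foo is stored in foo-data.
    setfattr -n ceph.dir.layout.pool -v foo-data /mnt/foo

    # Give foo-1/foo-2 a key whose OSD cap is limited to foo-data.
    ceph auth get-or-create client.foo \
        mon 'allow r' mds 'allow' osd 'allow rw pool=foo-data'

Keep in mind this only protects the file *data* at the OSD level; the
directory hierarchy and metadata are still open to any client with MDS
access, which is exactly the limitation described above.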

>
> 2. Is it planned to implement such a feature in a future release of Ceph?

There are a couple of students working on these features this summer,
and there have been many discussions among the core team about how to
enable secure multi-tenancy in CephFS.

>
> 3. Do you have workarounds to solve my security problem? Of course, one
> solution would be to install 2 different Ceph clusters, each with its own
> CephFS, but I can't do that (it would mean installing new daemons:
> monitors, MDS, etc., and that is not possible for me).

Just the file layout/multiple-pool one, right now. Or you could do
something like set up an NFS export of the CephFS that each user
mounts, but then you lose all the CephFS goodness on the clients...
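
If it helps, a rough sketch of that NFS route (the gateway hostname and
export path below are invented; only the gateway holds the CephFS key):

    # On an NFS gateway host that has the cephfs-account secret:
    mount -t ceph mon-1,mon-2,mon-3:/foo /export/foo \
        -o noatime,name=cephfs-account,secretfile=/etc/ceph/secret

    # /etc/exports on the gateway -- foo-1/foo-2 can only reach /foo:
    /export/foo   foo-1(rw,no_subtree_check) foo-2(rw,no_subtree_check)

    # On foo-1 and foo-2 (no ceph key on the tenant nodes at all):
    mount -t nfs gateway:/export/foo /mnt

That keeps the ceph key off the tenant machines entirely, at the cost
of an extra hop and the loss of the CephFS features mentioned above.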
-Greg
_______________________________________________
ceph-users mailing list
ceph-users@xxxxxxxxxxxxxx
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com




