Re: A few questions and remarks about cephx

On Sun, Sep 6, 2015 at 10:07 AM, Marin Bernard <lists@xxxxxxxxxxxx> wrote:
> Hi,
>
> I've just setup Ceph Hammer (latest version) on a single node (1 MON, 1
> MDS, 4 OSDs) for testing purposes. I used ceph-deploy. I only
> configured CephFS as I don't use RBD. My pool config is as follows:
>
> $ sudo ceph df
> GLOBAL:
>     SIZE      AVAIL     RAW USED     %RAW USED
>     7428G     7258G         169G          2.29
> POOLS:
>     NAME                ID     USED       %USED     MAX AVAIL     OBJECTS
>     cephfs_data         1        168G      2.26         7209G       78691
>     cephfs_metadata     2      41301k         0         7209G        2525
>
> Cluster is sane:
>
> $ sudo ceph status
>     cluster 72aba9bb-20db-4f62-8d03-0a8a1019effa
>      health HEALTH_OK
>      monmap e1: 1 mons at {nice-srv-cosd-00=10.16.1.161:6789/0}
>             election epoch 1, quorum 0 nice-srv-cosd-00
>      mdsmap e5: 1/1/1 up {0=nice-srv-cosd-00=up:active}
>      osdmap e71: 4 osds: 4 up, 4 in
>       pgmap v3723: 240 pgs, 2 pools, 167 GB data, 80969 objects
>             168 GB used, 7259 GB / 7428 GB avail
>                  240 active+clean
>   client io 59391 kB/s wr, 29 op/s
>
> CephFS is mounted on a client node, which uses a dedicated cephx key
> 'client.mynode'. I've had a hard time trying to figure out which cephx
> capabilities were required to give the node RW access to CephFS. I
> found documentation covering cephx capabilities for RBD, but not for
> CephFS. Did I miss something? As of now, the 'client.mynode' key has
> the following capabilities, which seem sufficient:

CephFS still isn't as well documented as the rest of Ceph, since
nobody's building a product on it yet.

>
> $ sudo ceph auth get client.mynode
> exported keyring for client.mynode
> [client.mynode]
>         key = myBeautifulKey
>         caps mds = "allow r"
>         caps mon = "allow r"
>         caps osd = "allow rw pool=cephfs_metadata, allow rw
> pool=cephfs_data"

The clients don't need to access the metadata pool at all; only the
MDSes need access to that.
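
So you could tighten that key up with something like the following
(untested, but it's just your existing caps with the metadata pool
clause dropped):

$ sudo ceph auth caps client.mynode \
      mon 'allow r' \
      mds 'allow r' \
      osd 'allow rw pool=cephfs_data'

Note that 'ceph auth caps' replaces all of the entity's caps, so you
have to restate the mon and mds ones too. All metadata I/O goes
through the MDS, which uses its own key, so the client never touches
the metadata pool directly.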

>
>
> Here are a few questions and remarks I made for myself when dealing
> with cephx:
>
> 1. Are mds caps needed for CephFS clients? If so, do they need r or rw
> access? Is it documented somewhere?

I think this just needs an "allow" in all released versions, although
we're making the language more flexible for Infernalis. (At least, we
hope https://github.com/ceph/ceph/pull/5638/ merges for Infernalis!)
It may not be documented well, but it's at least covered at
http://ceph.com/docs/master/rados/operations/user-management/#authorization-capabilities
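
So if you were creating a client key from scratch, a minimal one
would look something like this (an untested sketch; client.foo is
just a placeholder name, and adjust the pool to your setup):

$ sudo ceph auth get-or-create client.foo \
      mon 'allow r' \
      mds 'allow' \
      osd 'allow rw pool=cephfs_data'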

> 2. CephFS requires the clients to have rw access to multiple pools
> (data + metadata). I couldn't find the correct syntax to use with 'ceph
> auth caps' anywhere but on the ML archive (
> https://www.mail-archive.com/ceph-users@xxxxxxxxxxxxxx/msg17058.html).
> I suggest adding some documentation for it on the main website. Or is
> it already there?

Actually, clients just need to access whichever data pools they're
using. I thought we had documentation for multiple pools but I can't
find it; you should submit a bug! :)
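
If you ever do add a second data pool and point part of the tree at
it with a file layout, the client just needs an extra allow clause
for that pool. Roughly like this (untested; cephfs_data_ssd and the
mount path are made up for illustration):

$ sudo ceph osd pool create cephfs_data_ssd 64
$ sudo ceph mds add_data_pool cephfs_data_ssd
$ sudo ceph auth caps client.mynode mon 'allow r' mds 'allow r' \
      osd 'allow rw pool=cephfs_data, allow rw pool=cephfs_data_ssd'
$ setfattr -n ceph.dir.layout.pool -v cephfs_data_ssd \
      /mnt/cephfs/ssd-dir

(The setfattr runs on the client, against a directory whose new files
you want stored in the second pool.)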

>
>
> 3. I found 'ceph auth caps' syntax validation rather weak, as the
> command did not return an error for incorrect syntax. For instance,
> the following command did not raise an error even though it is
> (probably) syntactically incorrect:
>
> $ sudo ceph auth caps client.mynode mon 'allow r' mds 'allow r' osd
> 'allow rw pool=cephfs_metadata,cephfs_data'
>
> I suppose the comma is considered part of a single pool name, thus
> resulting in:
>
> $ sudo ceph auth get client.mynode
> exported keyring for client.mynode
> [client.mynode]
>         key = myBeautifulKey
>         caps mds = "allow r"
>         caps mon = "allow r"
>         caps osd = "allow rw pool=cephfs_metadata,cephfs_data"
>
> Is this expected behaviour? Are special chars allowed in pool names?

We've waffled on whether to do validation (cap syntax is validated by
the daemons using it, not the monitors, and we want to keep it
flexible in case, e.g., the monitors are still being upgraded but
you're using new-style syntax).
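
For the record, the form you were after puts the comma between
complete allow clauses rather than between pool names:

    osd 'allow rw pool=cephfs_metadata, allow rw pool=cephfs_data'

whereas what got stored was

    osd 'allow rw pool=cephfs_metadata,cephfs_data'

I haven't checked exactly how the OSD cap parser treats that trailing
',cephfs_data' (it may just end up as part of the pool name), but
either way it doesn't grant access to cephfs_data, which matches what
you describe in point 4. And as above, a CephFS client shouldn't need
the metadata pool clause at all.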

>
>
> 4. With the capabilities shown above, the client node was still able to
> mount CephFS and to make thousands of reads and writes without any
> error. However, since capabilities were incorrect, it only had rw
> access to the 'cephfs_metadata' pool, and no access at all to the
> 'cephfs_data' pool. As a consequence, files, folders, permissions,
> sizes and other metadata were written and retrieved correctly, but the
> actual data were silently lost. Shouldn't such a strange situation
> raise an error on the client?

If you use a new enough ceph-fuse (Hammer, maybe? otherwise
Infernalis) it will raise an error. I'm not sure whether the check is
in the kernel client yet; if not, it will be soon, though of course
you're unlikely to be running a kernel that new yet.
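
In the meantime, a quick way to catch this sort of mismatch is to
test the key against the data pool directly with rados, bypassing the
filesystem entirely. Something like this (a rough check that only
exercises the osd caps; it assumes the client.mynode keyring is
somewhere rados can find it, e.g. under /etc/ceph):

$ echo capcheck > /tmp/capcheck
$ sudo rados --id mynode -p cephfs_data put capcheck /tmp/capcheck
$ sudo rados --id mynode -p cephfs_data rm capcheck

If the key can't actually write to cephfs_data, the put fails with a
permission error instead of the data silently going nowhere.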
-Greg

>
>
> Thanks!
>
> Marin.
_______________________________________________
ceph-users mailing list
ceph-users@xxxxxxxxxxxxxx
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com


