A few questions and remarks about cephx

Hi,

I've just set up Ceph Hammer (latest version) on a single node (1 MON,
1 MDS, 4 OSDs) for testing purposes, using ceph-deploy. I only
configured CephFS, as I don't use RBD. My pool config is as follows:

$ sudo ceph df
GLOBAL:
    SIZE      AVAIL     RAW USED     %RAW USED 
    7428G     7258G         169G          2.29 
POOLS:
    NAME                ID     USED       %USED     MAX AVAIL     OBJECTS
    cephfs_data         1        168G      2.26         7209G       78691
    cephfs_metadata     2      41301k         0         7209G        2525

Cluster is sane:

$ sudo ceph status
    cluster 72aba9bb-20db-4f62-8d03-0a8a1019effa
     health HEALTH_OK
     monmap e1: 1 mons at {nice-srv-cosd-00=10.16.1.161:6789/0}
            election epoch 1, quorum 0 nice-srv-cosd-00
     mdsmap e5: 1/1/1 up {0=nice-srv-cosd-00=up:active}
     osdmap e71: 4 osds: 4 up, 4 in
      pgmap v3723: 240 pgs, 2 pools, 167 GB data, 80969 objects
            168 GB used, 7259 GB / 7428 GB avail
                 240 active+clean
  client io 59391 kB/s wr, 29 op/s

CephFS is mounted on a client node, which uses a dedicated cephx key
'client.mynode'. I've had a hard time figuring out which cephx
capabilities were required to give the node RW access to CephFS. I
found documentation covering cephx capabilities for RBD, but not for
CephFS. Did I miss something? As of now, the 'client.mynode' key has
the following capabilities, which seem sufficient:

$ sudo ceph auth get client.mynode
exported keyring for client.mynode
[client.mynode]
	key = myBeautifulKey
	caps mds = "allow r"
	caps mon = "allow r"
	caps osd = "allow rw pool=cephfs_metadata, allow rw
pool=cephfs_data"
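
For completeness, a kernel-client mount with this key looks roughly
like the following (the monitor address is the one from 'ceph status'
below; the mount point and secret file path are just examples, and the
secret file contains only the base64 key of 'client.mynode'):

$ sudo mount -t ceph 10.16.1.161:6789:/ /mnt/cephfs \
    -o name=mynode,secretfile=/etc/ceph/client.mynode.secret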


Here are a few questions and remarks I noted while dealing with cephx:

1. Are mds caps needed for CephFS clients? If so, do they need r or rw
access? Is this documented anywhere?
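
I suppose one could test this empirically by dropping the mds cap
entirely and re-mounting, e.g. with something like the following (not
tried yet). Note that 'ceph auth caps' replaces the whole capability
set, so every cap has to be respecified each time:

$ sudo ceph auth caps client.mynode mon 'allow r' \
    osd 'allow rw pool=cephfs_metadata, allow rw pool=cephfs_data'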


2. CephFS requires clients to have rw access to multiple pools
(data + metadata). I couldn't find the correct syntax to use with
'ceph auth caps' anywhere but in the ML archive (
https://www.mail-archive.com/ceph-users@xxxxxxxxxxxxxx/msg17058.html).
I suggest adding some documentation for it on the main website. Or is
it already there?
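
For reference, this is the invocation that produces the capabilities
shown at the top of this mail, i.e. one 'allow' clause per pool,
separated by a comma:

$ sudo ceph auth caps client.mynode mon 'allow r' mds 'allow r' \
    osd 'allow rw pool=cephfs_metadata, allow rw pool=cephfs_data'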


3. I found the syntax validation of 'ceph auth caps' rather weak, as
the command does not return an error when the syntax is incorrect. For
instance, the following command did not raise an error even though it
is (probably) syntactically incorrect:

$ sudo ceph auth caps client.mynode mon 'allow r' mds 'allow r' \
    osd 'allow rw pool=cephfs_metadata,cephfs_data'

I suppose the comma is treated as part of a single pool name, resulting
in:

$ sudo ceph auth get client.mynode
exported keyring for client.mynode
[client.mynode]
	key = myBeautifulKey
	caps mds = "allow r"
	caps mon = "allow r"
	caps osd = "allow rw pool=cephfs_metadata,cephfs_data"

Is this expected behaviour? Are special characters allowed in pool
names?
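
One way to check the pool name question might be to try creating a
pool with that literal name and see whether the monitor accepts it,
e.g.:

$ sudo ceph osd pool create 'cephfs_metadata,cephfs_data' 16

but I have not tried that yet.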


4. With the capabilities shown above, the client node was still able to
mount CephFS and to perform thousands of reads and writes without any
error. However, since the capabilities were incorrect, it only had rw
access to the 'cephfs_metadata' pool, and no access at all to the
'cephfs_data' pool. As a consequence, files, folders, permissions,
sizes and other metadata were written and retrieved correctly, but the
actual data was silently lost. Shouldn't such a strange situation raise
an error on the client?
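
For what it's worth, the asymmetry can apparently be confirmed by
talking to each pool directly with rados using the client's key
(assuming the keyring is in the default location,
/etc/ceph/ceph.client.mynode.keyring):

$ rados --id mynode -p cephfs_metadata ls | head
    # should succeed, given rw on the metadata pool
$ rados --id mynode -p cephfs_data put probe /etc/hostname
    # expected to fail (or hang) with no caps on the data pool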


Thanks!

Marin.


