Re: CephFS with cache-tier kernel-mount client unable to write (Nautilus)

Thanks, Ilya.

I just tried modifying the osd cap for client.testuser by getting rid of the "tag cephfs data=cephfs_test" part, and confirmed this key does work (i.e., it lets the CephFS client read and write).  It now reads:

[client.testuser]
key = XXXYYYYZZZ
caps mds = "allow rw"
caps mon = "allow r"
caps osd = "allow rw"  // previously "allow rw tag cephfs data="">

I tried removing either "tag cephfs" or "data=cephfs_test" (and leaving the other), but neither worked.
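
For the record, the change was made with `ceph auth caps`, which replaces all of a user's caps at once, so the unchanged mds and mon caps have to be restated along with the new osd cap:

$ ceph auth caps client.testuser \
    mds 'allow rw' \
    mon 'allow r' \
    osd 'allow rw'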

Now, here is my question: will not having the "allow rw tag cephfs data=<file system name>" part (under osd caps) result in a security/privacy loophole in a production cluster?  (I am still trying to assess whether having a Cache Tier behind CephFS is worth all the headaches...)
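
The alternative I am considering, rather than a blanket "allow rw", is naming both data pools explicitly in the osd cap, something like the following (untested; the pool names below are just placeholders for my base and cache pools):

$ ceph auth caps client.testuser \
    mds 'allow rw' \
    mon 'allow r' \
    osd 'allow rw pool=cephfs_test_data, allow rw pool=cephfs_test_cache'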

Mami Hayashida
Research Computing Associate
Univ. of Kentucky ITS Research Computing Infrastructure



On Tue, Jan 21, 2020 at 2:21 PM Ilya Dryomov <idryomov@xxxxxxxxx> wrote:
On Tue, Jan 21, 2020 at 7:51 PM Hayashida, Mami <mami.hayashida@xxxxxxx> wrote:
>
> Ilya,
>
> Thank you for your suggestions!
>
> `dmesg` (on the client node) only had `libceph: mon0 10.33.70.222:6789 socket error on write`.  No further detail.  But using the admin key (client.admin) for mounting CephFS solved my problem.  I was able to write successfully! :-)
>
> $ sudo mount -t ceph 10.33.70.222:6789:/  /mnt/cephfs -o name=admin,secretfile=/etc/ceph/fsclient_secret     // with the corresponding client.admin key
>
> $ sudo vim /mnt/cephfs/file4
> $ sudo ls -l /mnt/cephfs
> total 1
> -rw-r--r-- 1 root root  0 Jan 21 16:25 file1
> -rw-r--r-- 1 root root  0 Jan 21 16:45 file2
> -rw-r--r-- 1 root root  0 Jan 21 18:35 file3
> -rw-r--r-- 1 root root 22 Jan 21 18:42 file4
>
> Now, here is the difference between the two keys. client.testuser was generated with the command `ceph fs authorize cephfs_test client.testuser / rw`, and something in there is evidently interfering with CephFS when a Cache Tier pool is involved.  Do I need to edit the `tag` or the `data` part?  I should also mention that the same type of key (like client.testuser) worked just fine when I was testing CephFS without a Cache Tier pool.
>
> client.admin
> key: XXXYYYYZZZ
> caps: [mds] allow *
> caps: [mgr] allow *
> caps: [mon] allow *
> caps: [osd] allow *
>
> client.testuser
> key: XXXYYYYZZZ
> caps: [mds] allow rw
> caps: [mon] allow r
> caps: [osd] allow rw tag cephfs data=cephfs_test
Right.  I think this is because with cache tiering you have two data
pools involved, but "ceph fs authorize" generates an OSD cap that ends
up restricting the client to the data pool that that the filesystem
"knows" about.

You will probably need to create your client users by hand instead of
generating them with "ceph fs authorize".  CCing Patrick who might know
more.

Thanks,

                Ilya
_______________________________________________
ceph-users mailing list
ceph-users@xxxxxxxxxxxxxx
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com
