Re: CephFS with cache-tier kernel-mount client unable to write (Nautilus)

Thanks, Ilya.  

First, I was not sure whether to post my question on @ceph.io or @lists.ceph.com (I subscribe to both) -- should I use @ceph.io in the future?

Second, thanks for your advice on cache-tiering -- I was starting to feel that way myself, but it's always good to know what the Ceph "experts" would say.

Third, I tried the pool application enable/set commands you outlined but got errors -- Ceph is not allowing me to enable or set the application on the cache tier:

$ ceph osd pool application enable cephfs-data-cache cephfs
Error EINVAL: application must be enabled on base tier
$ ceph osd pool application set cephfs-data-cache cephfs data cephfs_test
Error EINVAL: application metadata must be set on base tier
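
If I am reading those errors correctly, the application would have to be enabled and set on the base data pool rather than on the cache pool -- something like the following, where "cephfs-data" is just a stand-in for whatever the base pool is actually named:

$ ceph osd pool application enable cephfs-data cephfs
$ ceph osd pool application set cephfs-data cephfs data cephfs_test

I have not confirmed whether doing that on the base tier makes the tag-based osd cap work against the cache pool, though.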

Since at this point it is highly unlikely that we will be using cache tiering on our production clusters, and there is a workaround (manually creating a CephFS client key), this is nothing serious or urgent, but I thought I should let you know.
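
For reference, the workaround I mentioned is essentially creating the CephFS client key by hand with a plain osd cap in place of the tag-based one, roughly along these lines (the client name is just the test user from earlier in the thread):

$ ceph auth get-or-create client.testuser mds 'allow rw' mon 'allow r' osd 'allow rw'

As you pointed out, "allow rw" on osd is quite broad, so we would treat this only as a stopgap.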

Again, thanks for your help!

Mami



On Thu, Jan 23, 2020 at 8:40 AM Ilya Dryomov <idryomov@xxxxxxxxx> wrote:
On Thu, Jan 23, 2020 at 2:36 PM Ilya Dryomov <idryomov@xxxxxxxxx> wrote:
>
> On Wed, Jan 22, 2020 at 6:18 PM Hayashida, Mami <mami.hayashida@xxxxxxx> wrote:
> >
> > Thanks, Ilya.
> >
> > I just tried modifying the osd cap for client.testuser by getting rid of the "tag cephfs data=<file system name>" part and confirmed this key does work (i.e. lets the CephFS client read/write).  It now reads:
> >
> > [client.testuser]
> > key = XXXYYYYZZZ
> > caps mds = "allow rw"
> > caps mon = "allow r"
> > caps osd = "allow rw"  // previously "allow rw tag cephfs data=<file system name>"
> >
> > I tried removing either "tag cephfs" or "data=<file system name>" (and leaving the other), but neither worked.
> >
> > Now, here is my question: will not having the "allow rw tag cephfs data=<file system name>" part (under osd caps) result in a security/privacy loophole in a production cluster?   (I am still trying to assess whether having a Cache Tier behind CephFS is worth all the headaches...)
>
> It's probably not worth it.  Unless you have a specific tiered
> workload in mind and your cache pool is large enough for it, I'd
> recommend staying away from cache tiering.
>
> "allow rw" for osd is only marginally more restrictive than
> client.admin's "allow *", allowing the user to read/write every object
> in the cluster.  Scratch my reply about doing it by hand -- try the
> following:
>
>   $ ceph osd pool application enable cephfs-data-cache cephfs
>   $ ceph osd pool application set cephfs-data-cache cephfs data cephfs_test
>   $ ceph fs authorize cephfs_test ...  (as before)
>
> You will see the same "allow rw tag cephfs data=<file system name>" cap in
> "ceph auth list" output, but it should allow accessing cephfs-data-cache.

Dropping ceph-users@xxxxxxxxxxxxxx and resending to ceph-users@xxxxxxx.

Thanks,

                Ilya
_______________________________________________
ceph-users mailing list -- ceph-users@xxxxxxx
To unsubscribe send an email to ceph-users-leave@xxxxxxx
