Hello Michal,
With CephFS and a single file system shared across multiple k8s clusters,
you should use subvolume groups to limit data exposure. You'll find an
example of how to use subvolume groups in the ceph-csi-cephfs helm chart
[1]. Essentially you just have to set the subvolumeGroup to whatever you
like and then create the associated CephFS keyring with the following caps:
  ceph auth get-or-create client.cephfs.k8s-cluster-1.admin \
    mon "allow r" \
    osd "allow rw tag cephfs *=*" \
    mds "allow rw path=/volumes/csi-k8s-cluster-1" \
    mgr "allow rw" \
    -o /etc/ceph/client.cephfs.k8s-cluster-1.admin.keyring

The resulting key carries:

  caps: [mds] allow rw path=/volumes/csi-k8s-cluster-1
  caps: [mgr] allow rw
  caps: [mon] allow r
  caps: [osd] allow rw tag cephfs *=*
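For reference, the matching bit of the chart's values.yaml would look
roughly like this (the cephFS.subvolumeGroup key is my reading of [1],
so double-check it against your chart version):

  csiConfig:
    - clusterID: "<cluster-id>"
      monitors:
        - "<MONValue1>"
        - "<MONValue2>"
      cephFS:
        subvolumeGroup: "csi-k8s-cluster-1"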
The subvolume group will be created by ceph-csi-cephfs if I remember
correctly but you can also take care of this on the ceph side with 'ceph
fs subvolumegroup create cephfs csi-k8s-cluster-1'.
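To double-check that the group exists:

  ceph fs subvolumegroup ls cephfs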
PVs will then be created as subvolumes in this subvolumegroup. To list
them, use 'ceph fs subvolume ls cephfs --group_name=csi-k8s-cluster-1'.
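For completeness, here's a minimal CephFS StorageClass sketch to go with
it. The secret names and the ceph-csi-cephfs namespace below are
placeholders for your own deployment, and note that the subvolume group
itself is picked up from the csiConfig entry for that clusterID, not
from the StorageClass:

  apiVersion: storage.k8s.io/v1
  kind: StorageClass
  metadata:
    name: csi-cephfs-sc
  provisioner: cephfs.csi.ceph.com
  parameters:
    clusterID: "<cluster-id>"
    fsName: cephfs
    # secrets holding the client.cephfs.k8s-cluster-1.admin credentials
    csi.storage.k8s.io/provisioner-secret-name: csi-cephfs-secret
    csi.storage.k8s.io/provisioner-secret-namespace: ceph-csi-cephfs
    csi.storage.k8s.io/controller-expand-secret-name: csi-cephfs-secret
    csi.storage.k8s.io/controller-expand-secret-namespace: ceph-csi-cephfs
    csi.storage.k8s.io/node-stage-secret-name: csi-cephfs-secret
    csi.storage.k8s.io/node-stage-secret-namespace: ceph-csi-cephfs
  reclaimPolicy: Delete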
To achieve the same goal with RBD images, you should use rados
namespaces. The current helm chart [2] seems to lack documentation for
the radosNamespace setting, but it works fine provided you set it as
below:
  csiConfig:
    - clusterID: "<cluster-id>"
      monitors:
        - "<MONValue1>"
        - "<MONValue2>"
      radosNamespace: "k8s-cluster-1"
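One more thing: as far as I know, ceph-csi won't create the rados
namespace for you, so create it on the Ceph side first and verify it:

  rbd namespace create <your_k8s_pool>/k8s-cluster-1
  rbd namespace ls --pool <your_k8s_pool>

With the namespace in place, create the keyrings: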
  ceph auth get-or-create client.rbd.name.admin \
    mon "profile rbd" \
    osd "allow rwx pool <your_k8s_pool> object_prefix rbd_info, allow rwx pool <your_k8s_pool> namespace k8s-cluster-1" \
    mgr "profile rbd pool=<your_k8s_pool> namespace=k8s-cluster-1" \
    -o /etc/ceph/client.rbd.name.admin.keyring

which gives:

  caps: [mgr] profile rbd pool=<your_k8s_pool> namespace=k8s-cluster-1
  caps: [mon] profile rbd
  caps: [osd] allow rwx pool <your_k8s_pool> object_prefix rbd_info, allow rwx pool <your_k8s_pool> namespace k8s-cluster-1
  ceph auth get-or-create client.rbd.name.user \
    mon "profile rbd" \
    osd "allow class-read object_prefix rbd_children, allow rwx pool=<your_k8s_pool> namespace=k8s-cluster-1" \
    -o /etc/ceph/client.rbd.name.user.keyring

which gives:

  caps: [mon] profile rbd
  caps: [osd] allow class-read object_prefix rbd_children, allow rwx pool=<your_k8s_pool> namespace=k8s-cluster-1
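A matching RBD StorageClass would look something like this (again just
a sketch; secret names and namespace are placeholders, and the rados
namespace comes from the csiConfig entry above, not from the
StorageClass):

  apiVersion: storage.k8s.io/v1
  kind: StorageClass
  metadata:
    name: csi-rbd-sc
  provisioner: rbd.csi.ceph.com
  parameters:
    clusterID: "<cluster-id>"
    pool: <your_k8s_pool>
    imageFeatures: layering
    csi.storage.k8s.io/provisioner-secret-name: csi-rbd-secret
    csi.storage.k8s.io/provisioner-secret-namespace: ceph-csi-rbd
    csi.storage.k8s.io/controller-expand-secret-name: csi-rbd-secret
    csi.storage.k8s.io/controller-expand-secret-namespace: ceph-csi-rbd
    csi.storage.k8s.io/node-stage-secret-name: csi-rbd-secret
    csi.storage.k8s.io/node-stage-secret-namespace: ceph-csi-rbd
  reclaimPolicy: Delete

PV images then land in that namespace and can be listed with 'rbd ls
--pool <your_k8s_pool> --namespace k8s-cluster-1'.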
Capabilities required for ceph-csi-cephfs and ceph-csi-rbd are described
here [3].
This should get you started. Let me know if you see any clever/safer
caps to use.
Regards,
Frédéric.
[1] https://github.com/ceph/ceph-csi/blob/devel/charts/ceph-csi-cephfs/values.yaml#L20
[2] https://github.com/ceph/ceph-csi/blob/devel/charts/ceph-csi-rbd/values.yaml#L20
[3] https://github.com/ceph/ceph-csi/blob/devel/docs/capabilities.md
--
Best regards,
Frédéric Nass
Direction du Numérique
Sous-direction Infrastructures et Services
Tel: 03.72.74.11.35
On 20/01/2022 at 09:26, Michal Strnad wrote:
Hi,
We are using CephFS in our Kubernetes clusters and now we are trying
to optimize permissions/caps in keyrings. Every guide we found
contains something like: create the file system by specifying the
desired settings for the metadata pool, data pool, and an admin keyring
with access to the entire file system... Is there a better way, where
we don't need an admin key but only a restricted key? What are you
using in your environments?
Multiple file systems aren't an option for us.
Thanks for your help
Regards,
Michal Strnad
_______________________________________________
ceph-users mailing list -- ceph-users@xxxxxxx
To unsubscribe send an email to ceph-users-leave@xxxxxxx