Multi-Tenancy in a Ceph RBD Cluster

Hi Ceph users,
I am relatively new to Ceph and am trying to provision Ceph RBD volumes using Kubernetes.

I would like to know the best practices for hosting a multi-tenant Ceph cluster. Specifically, I have the following questions:

- Is it OK to share a single Ceph pool among multiple tenants? If yes, how do you guarantee that one tenant's volumes are not accessible (mountable/mappable/unmappable/deletable/mutable) to other tenants?
- Can a single Ceph pool have multiple admin and user keyrings generated for the rbd create and rbd map commands? This way I would assign a different keyring to each tenant (see the first sketch after this list).
- Can the rbd map command be run remotely, targeting any node on which we want to mount RBD volumes, or must it be run on the node doing the mounting (see the second sketch)? Is this going to be possible in the future?
- In terms of Ceph fault tolerance and resiliency, is one pool per customer a better design, or should a single pool be shared among multiple customers?
- With a single pool for all customers, how can we get Ceph statistics per customer? Is it possible to somehow derive them from the RBD volumes (see the third sketch)?
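
To make the keyring question concrete, this is roughly what I have in mind; the pool and tenant names are made up, and I am not sure these caps actually give per-tenant isolation:

    # Shared pool "rbd-tenants", one cephx user per tenant. The "profile rbd"
    # caps are from the docs; older releases would use the equivalent
    # mon 'allow r' / osd 'allow rwx pool=rbd-tenants' caps instead.
    ceph auth get-or-create client.tenant-a \
        mon 'profile rbd' \
        osd 'profile rbd pool=rbd-tenants' \
        -o /etc/ceph/ceph.client.tenant-a.keyring

As far as I can tell this only restricts a tenant to the pool, not to their own images inside it, which is exactly why I am asking how isolation within one shared pool is supposed to work.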
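
For the mapping question, what I am doing today looks like the following (again with made-up names), and it only works when run on the node that should receive the block device:

    # Create a volume as the tenant user, then map it; mapping creates a
    # /dev/rbd* device on the local machine, so I currently have to run
    # this on the Kubernetes node itself.
    rbd create rbd-tenants/tenant-a-vol-001 --size 10240 \
        --id tenant-a --keyring /etc/ceph/ceph.client.tenant-a.keyring
    rbd map rbd-tenants/tenant-a-vol-001 \
        --id tenant-a --keyring /etc/ceph/ceph.client.tenant-a.keyring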
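
For per-customer statistics, the best I have found so far is per-pool and per-image numbers, which I could aggregate per tenant if the images follow a naming convention; is there a better way?

    # Per-pool usage:
    ceph df detail
    # Per-image usage, which I would sum per tenant based on a naming
    # convention such as <tenant>-<volume>:
    rbd du rbd-tenants/tenant-a-vol-001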

Thanks for your responses
Mayank
