Re: Multi Tenancy in Ceph RBD Cluster

I don't know the specifics of Kubernetes or creating multiple keyrings for servers, so I'll leave those for someone else.  I will say that if you are kernel-mapping your RBDs, then the first tenant to do so will lock the RBD and no other tenant can map it.  This is built into Ceph.  The original tenant would need to unmap it before a second tenant could access it.  This is different if you are not mapping RBDs and are just using librbd to deal with them.
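As a quick way to see that behavior in practice, here is a minimal sketch, assuming a hypothetical image vol1 in a pool named rbd (substitute your own names):

    rbd status rbd/vol1      # lists current watchers, i.e. clients that have the image open or mapped
    rbd lock list rbd/vol1   # shows any locks currently held on the image
    rbd map rbd/vol1         # maps the image on this host; release it later with "rbd unmap /dev/rbd0"

If another tenant already has the image, the status/lock output above is how you would see that before attempting your own map.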

Multiple pools in Ceph are not free.  Pools are a fairly costly resource in Ceph: a pool's data is stored in PGs, those PGs are distributed across the OSDs in your cluster, and the more PGs an OSD holds, the more memory that OSD requires.  It does not scale infinitely.  If you are talking about one pool per customer for a dozen or fewer customers, it might work for your use case, but it doesn't scale as the customer base grows.
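To make the cost concrete, here is a rough sketch (the pool name and PG count are placeholders you would size with a PG calculator):

    ceph osd pool create customer-a 64 64   # creates 64 PGs (and 64 PGPs) just for this one pool
    ceph osd df                             # the PGS column shows how many PGs each OSD now carries

With a replication size of 3, those 64 PGs become 192 PG copies spread across your OSDs, and every additional per-customer pool adds its own set on top of that.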

RBD map could be run remotely via SSH, but that isn't what you were asking about.  I don't know of any functionality that allows you to use a keyring on server A to map an RBD on server B.
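For completeness, the SSH variant just executes the map on the remote host; the keyring still has to live on that host.  A minimal sketch, with the hostname, client name, and keyring path all hypothetical:

    ssh root@serverB rbd map --id tenant1 --keyring /etc/ceph/ceph.client.tenant1.keyring rbd/vol1

Note that this only runs the command remotely; it does not let a keyring stored on server A authorize the map on server B.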

"Ceph Statistics" is VERY broad.  Are you talking IOPS, disk usage, throughput, etc?  disk usage is incredibly simple to calculate, especially if the RBD has object-map enabled.  A simple rbd du rbd_name would give you the disk usage per RBD and return in seconds.

On Mon, Jun 26, 2017 at 2:00 AM Mayank Kumar <krmayankk@xxxxxxxxx> wrote:
Hi Ceph Users
I am relatively new to Ceph and am trying to provision Ceph RBD volumes using Kubernetes.

I would like to know the best practices for hosting a multi-tenant Ceph cluster. Specifically, I have the following questions:

- Is it OK to share a single Ceph pool among multiple tenants? If yes, how do you guarantee that one tenant's volumes are not accessible (mountable/mappable/unmappable/deletable/mutable) to other tenants?
- Can a single Ceph pool have multiple admin and user keyrings generated for the rbd create and rbd map commands? This way I could assign a different keyring to each tenant.

- Can an rbd map command be run remotely for any node on which we want to mount RBD volumes, or must it be run from the same node doing the mount? Is this going to be possible in the future?

- In terms of Ceph fault tolerance and resiliency, is one pool per customer a better design, or should a single pool be shared among multiple customers?
- With a single pool for all customers, how can we get Ceph statistics per customer? Is it possible to somehow derive this from the RBD volumes?

Thanks for your responses
Mayank
_______________________________________________
ceph-users mailing list
ceph-users@xxxxxxxxxxxxxx
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com
