Quota Management in Ceph

On 05/21/2014 03:29 PM, Vilobh Meshram wrote:
> Hi All,
>
> I want to understand how Ceph users go about quota management when
> Ceph is used with OpenStack.
>
>  1. Is it recommended to use a common pool, say 'volumes', for creating
>     volumes which is shared by all tenants? In this case a common
>     keyring ceph.common.keyring will be shared across all the
>     tenants/common volume pool.

Yes, using a common pool is recommended. More pools take up more CPU and
memory on the OSDs, since placement groups (shards of pools) are the
unit of recovery. Having a pool per tenant would be a scaling issue.
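
If you want to script creation of that shared pool, the librados Python
bindings can do it. The sketch below is only an illustration; the conffile
path, client id and the pool name 'volumes' are placeholders, and in
practice the same thing is usually done with the ceph CLI
('ceph osd pool create volumes <pg_num>'):

    import rados

    # Connect to the cluster; conffile path and client id are assumptions.
    cluster = rados.Rados(conffile='/etc/ceph/ceph.conf', rados_id='admin')
    cluster.connect()

    # Create one shared pool for all Cinder volumes rather than one per tenant.
    if not cluster.pool_exists('volumes'):
        cluster.create_pool('volumes')

    cluster.shutdown()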

There is a further level of division in rados called a 'namespace',
which can provide finer-grained cephx security within a pool, but
rbd does not support it yet, and as it stands it would not be useful
for quotas [1].

>  2. Or is it recommended to use a pool for each tenant, say 'volume1 pool
>     for tenant1', 'volume2 pool for tenant2'? In this case we will
>     have a keyring per volume pool/tenant, i.e. keyring 1 for
>     volume/tenant1 and so on.
>
> Considering both of these cases, how do we guarantee that we enforce a
> quota for each user inside a tenant, say a quota of 5 volumes to be
> created by each user?

When using OpenStack, Cinder does the quota management for volumes based
on its database, and can limit total space, number of volumes and
number of snapshots [2]. RBD is entirely unaware of OpenStack tenants.
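
As a rough illustration of setting those per-tenant limits from Python (the
credentials, endpoint and tenant id below are placeholders, not anything
from this thread), python-cinderclient exposes the same quota calls as the
CLI in [2]:

    from cinderclient import client

    # Admin credentials and auth URL are assumptions; adjust for your cloud.
    cinder = client.Client('2', 'admin', 'secret', 'admin',
                           'http://controller:5000/v2.0')

    # Limit a tenant to 5 volumes, 10 snapshots and 500 GB of volume storage.
    cinder.quotas.update('<tenant-id>', volumes=5, snapshots=10, gigabytes=500)

    # Read the quotas back to verify.
    print(cinder.quotas.get('<tenant-id>'))

Note that this only limits what Cinder will create on the tenant's behalf;
nothing on the RBD side knows about the tenant.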

Josh

[1] http://wiki.ceph.com/Planning/Sideboard/rbd%3A_namespace_support
[2] http://docs.openstack.org/user-guide-admin/content/cli_set_quotas.html#cli_set_block_storage_quotas
