RBD quotas and OpenStack Cinder

Hi folks,

We're using Ceph behind OpenStack Cinder, so we already have external quotas and mediated access to our cluster. We want to serve multiple Cinder services out of the same cluster, each connected to a different storage pool (we have different IaaS tenants).

An issue in this space is that volume users almost never use their full quota, for at least a couple of reasons: 1) storage inherently gets consumed gradually, and 2) users often create much larger volumes than they need. Because of this, if we restrict the total quota we hand out to <= pool capacity, we will probably never get very good utilisation of the underlying storage, which suggests some level of over-commit at the Cinder quota level would be reasonable.
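To put some rough, made-up numbers on that:

    pool capacity:           100 TB
    Cinder quota handed out: 200 TB  (2x over-commit)
    typical utilisation:     ~40% of provisioned size
    expected real usage:     ~80 TB  (fits, but nothing enforces it)

So over-commit buys utilisation, at the cost of a fill risk that something has to guard against.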

However!
Any level of over-commit opens the cluster up to the risk of filling, something Ceph (and, I'm sure, other block-storage providers) does not tolerate well. Ideally I'd like to be able to give out 2x as much Cinder storage quota as we have capacity, but there doesn't seem to be a safe way of doing this at the moment. I thought pool quotas might be the thing, but sadly I'm informed the current implementation is based on real usage, so the quota is only enforced once a user actually writes data over the limit - and that results in ENOSPC for all clients of the pool! I'm not really sure how such a quota would ever be useful in practice (it's certainly not something that can be used in production)...
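For reference, this is the kind of pool quota I mean (pool name 'volumes' is just an example); as far as I understand it only trips once bytes have actually been written past the limit:

    ceph osd pool set-quota volumes max_bytes 109951162777600   # 100 TiB
    ceph osd pool get-quota volumes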

One possibility I think would be useful is an RBD-specific quota based on "provisioned" RBD size: creation of a new RBD, or extension of an existing one, would be refused if the full provisioned size of all RBDs would then exceed the quota. Such a quota would at least allow us to safely give out more Cinder quota. It would not fundamentally help with the fact that not all volumes are full, but that's more a matter of user education now that growing volumes is possible with Havana Cinder.
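To illustrate the idea, here's a rough external sketch using the python-rbd bindings (the pool name and quota value are placeholders, and a real implementation would need to live inside the cluster to avoid races between clients checking and then writing):

    import rados
    import rbd

    POOL = 'volumes'                     # placeholder pool name
    PROVISIONED_QUOTA = 100 * 1024 ** 4  # e.g. 100 TiB of provisioned size

    cluster = rados.Rados(conffile='/etc/ceph/ceph.conf')
    cluster.connect()
    try:
        ioctx = cluster.open_ioctx(POOL)
        try:
            total = 0
            for name in rbd.RBD().list(ioctx):
                image = rbd.Image(ioctx, name)
                try:
                    # size() is the full provisioned size, not bytes written
                    total += image.size()
                finally:
                    image.close()
            print('provisioned %d of %d bytes' % (total, PROVISIONED_QUOTA))
            # a create/extend of N bytes would be refused if total + N > quota
        finally:
            ioctx.close()
    finally:
        cluster.shutdown()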

Thoughts?

--
Cheers,
~Blairo

