Erasure coding RBD pool for OpenStack Glance, Nova and Cinder

Hello Cephers!

Having read that EC pools have supported writable RBD images since Luminous, 
I decided to use one in a new OpenStack cloud deployment. The gain in usable 
storage is really noticeable, and I want to reduce the storage cost.

So I decided to use ceph-ansible to deploy the Ceph cluster (Mimic 13.2.0), 
and openstack-ansible for the OpenStack (Queens) side.

Since I started, I have run into a few surprises:

1) ceph-ansible (stable-3.1) does not handle EC pool creation correctly. I had 
to fix the parameter order in the Ansible role that launches the pool creation 
commands for the OpenStack pools; the order the CLI expects is sketched below.
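
For reference, a minimal sketch of the command order the ceph CLI expects for 
an EC pool (the profile name, pool name, PG counts and k/m values here are 
only placeholders, not what ceph-ansible generates):

  # create an erasure-code profile, then a pool that uses it
  ceph osd erasure-code-profile set ec-4-2 k=4 m=2 crush-failure-domain=host
  ceph osd pool create images-data 128 128 erasure ec-4-2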

2) After deploying OpenStack, I tried to upload Glance images, without success.
First, I had forgotten to enable overwrites on the EC pool (ceph-ansible does 
not do it). Then I found that RBD cannot use an EC pool directly: the image 
metadata has to live in a replicated pool, with the EC pool used only for data.

It is actually documented here, but I had not read it until now:
http://docs.ceph.com/docs/master/rados/operations/erasure-code/?highlight=erasure#erasure-coding-with-overwrites
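
Following that documentation, the working combination looks roughly like this 
(pool names, sizes and PG counts are only placeholders):

  # overwrites are off by default on EC pools; RBD needs them
  ceph osd pool set images-data allow_ec_overwrites true

  # image metadata goes to a small replicated pool, data to the EC pool
  ceph osd pool create images 64 64 replicated
  ceph osd pool application enable images rbd
  rbd create --size 10G --data-pool images-data images/test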

3) OpenStack, and therefore openstack-ansible, cannot be configured to specify 
this data pool.

By chance, I found this link:
https://www.reddit.com/r/ceph/comments/72yc9m/ceph_openstack_with_ec/

So it is possible to set a default data pool in the Ceph client configuration, 
used by the glance, cinder and nova users (sketched below).
I still have to check whether ceph-ansible will help me configure that.
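
If I understand that thread correctly, the idea is to set the default data 
pool per client section in ceph.conf, so each service writes its data to the 
EC pool without OpenStack knowing about it. Something like this (the section 
and pool names are only what I plan to use, not tested yet):

  [client.glance]
  rbd default data pool = images-data

  [client.cinder]
  rbd default data pool = volumes-data

  [client.nova]
  rbd default data pool = vms-data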

Now I wonder what other surprises I will face. Has anyone used EC pools with 
OpenStack in production?
I have read, for example, that small-write performance is poor.

Ah, and one more question: can we do copy-on-write cloning between an EC pool 
and a replicated one?
For example, storing my Glance images on an EC-backed pool and then creating 
an instance volume on a replicated, performance-oriented Cinder pool.
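
In other words, something like this (names are hypothetical), where the parent 
image data sits in the EC-backed Glance pool and the clone lands in the 
replicated Cinder pool:

  rbd snap create images/base-image@snap
  rbd snap protect images/base-image@snap
  rbd clone images/base-image@snap volumes/volume-from-base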

Please share your experience with EC pools and OpenStack!

Thanks,
--
Gilles




