Erasure Coded Pools and OpenStack

Greetings all!

 

I’m currently attempting to create an EC pool for my Glance images; however, when I save an image through the OpenStack command line, the data does not end up in the EC pool.

So a little information on what I’ve done so far.

The way I understand things to work is that you need a replicated metadata pool to front the EC pool, so I created an ‘images’ pool and then an ‘images_data_ec’ pool.

Following are the steps I used.

 

First, I created my EC profile:

 

ceph osd erasure-code-profile set 2-1 k=2 m=1 crush-device-class=hdd

 

I used k=2 and m=1 to match my dev cluster, which has only three OSD servers (10 x 4 TB OSDs and 2 SSDs for journals per server) and is set to a failure domain of server.
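
For reference, this is how I have been double-checking the profile afterwards (my understanding is that, since I didn’t set it explicitly, the crush-failure-domain defaults to host, which is what I want here):

# list all profiles, then dump the one I created
ceph osd erasure-code-profile ls
ceph osd erasure-code-profile get 2-1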

 

Then I created my pools:

 

ceph osd pool create images 16

ceph osd pool create images_data_ec 128 erasure 2-1

ceph osd pool application enable images rbd

ceph osd pool application enable images_data_ec rbd

ceph osd pool set images_data_ec allow_ec_overwrites true
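
To confirm that the pools were created the way I intended (the EC pool using the 2-1 profile and with the ec_overwrites flag set), I have been checking with:

# shows size/min_size, the erasure code profile, and the flags per pool
ceph osd pool ls detail

# should return: allow_ec_overwrites: true
ceph osd pool get images_data_ec allow_ec_overwrites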

 

Then I added the following to my ceph.conf to tell Ceph to use the images_data_ec pool when the glance user is invoked. I then restarted all the Ceph services on all the nodes.

 

[client.glance]

rbd default data pool = images_data_ec
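
Since ‘rbd default data pool’ is a client-side (librbd) option, one sanity check I have been considering is to create an image as the glance user without passing --data-pool at all, and see whether the data pool is picked up from the [client.glance] section (the keyring path below is just where mine happens to live, so adjust as needed):

# create without --data-pool; if the conf section is honoured, info should show data_pool: images_data_ec
rbd --id glance --keyring /etc/ceph/ceph.client.glance.keyring create images/conf-test --size 1G
rbd --id glance --keyring /etc/ceph/ceph.client.glance.keyring info images/conf-test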

 

So, with that configured, I used the rbd CLI to create an image:

 

rbd create images/myimage --size 1G --data-pool images_data_ec

 

Then I checked the image details:

 

rbd -p images --image myimage info

rbd image 'myimage':

                size 1024 GB in 262144 objects

                order 22 (4096 kB objects)

                data_pool: images_data_ec

                block_name_prefix: rbd_data.162.6fdb4874b0dc51

                format: 2

                features: layering, exclusive-lock, object-map, fast-diff, deep-flatten, data-pool

                flags:

                create_timestamp: Thu Mar 22 16:29:03 2018

 

This looks okay, so I continued on and uploaded an image through the OpenStack CLI:

 

openstack image create --disk-format qcow2 --unprotected --public --file cirros-0.4.0-x86_64-disk.img cirros-test-image
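
In case it matters, this is roughly what the RBD section of my glance-api.conf looks like (treat it as a sketch of my setup; the option names come from the glance_store RBD driver, but paths and values will differ between deployments):

[glance_store]
stores = rbd
default_store = rbd
rbd_store_pool = images
rbd_store_user = glance
rbd_store_ceph_conf = /etc/ceph/ceph.conf
# glance writes 8 MB chunks by default, which matches the order 23 in the output below
rbd_store_chunk_size = 8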

 

However, when I inspect the image I see that it is not using the data pool as expected:

 

rbd -p images --image 91147e95-3e3d-4dc1-934d-bcaad7f645be info

rbd image '91147e95-3e3d-4dc1-934d-bcaad7f645be':

                size 12418 kB in 2 objects

                order 23 (8192 kB objects)

                block_name_prefix: rbd_data.6fdbe73cf7c855

                format: 2

                features: layering, exclusive-lock, object-map, fast-diff, deep-flatten

                flags:

                create_timestamp: Thu Mar 22 16:29:37 2018

 

When I look at the usage of the two pools, only the images pool has any data in it. Also, when I query the EC pool for a list of images, it comes back empty, even though I expected the image I created with the rbd CLI to have put something there (or so I thought). Is there something I can query in the EC pool to prove that data is being written there?
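
These are the checks I have been running to see where the data actually lands (my understanding is that ‘rbd ls’ only lists the image headers in the base pool, so listing the rados objects in the data pool seemed like the right way to look for rbd_data objects; please correct me if that is the wrong approach):

# per-pool usage
ceph df detail
rados df

# raw object listing for each pool; the data pool should contain rbd_data.* objects once data is written
rados -p images ls | head -20
rados -p images_data_ec ls | head -20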

 

So far, I have been unable to get this to work and I’m completely at a loss.

Does anyone have experience with this configuration, or perhaps some guidance for getting it to work?

Ideally, I want to configure OpenStack to use EC pools for bulk data and replicated pools for active data (such as OS volumes and database volumes), as sketched below.
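
As a rough sketch of where I would like to end up (assuming the same per-client ‘rbd default data pool’ mechanism is the right tool; the cinder-bulk user and the volumes_bulk_ec pool are just placeholder names for my environment):

[client.glance]
# image data goes to the EC pool; image headers stay in the replicated ‘images’ pool
rbd default data pool = images_data_ec

[client.cinder-bulk]
# hypothetical second Cinder backend/user for bulk volumes, writing data to an EC pool;
# the regular cinder user gets no override, so OS and database volumes stay fully replicated
rbd default data pool = volumes_bulk_ec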

Thank you for taking the time to read this far.

I am happy to provide any further details you might need, or to try any configuration changes you might suggest. This is a purely development environment, so I’m not afraid to try things.

 

Cheers,

Mike

