Re: Erasure Coded Pools and OpenStack

Thank you for getting back to me so quickly.

Your suggestion of adding the config change in ceph.conf was a great one. That helped a lot. I didn't realize that the client would need to be updated; I had thought it was a cluster-side modification only.
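In case it helps anyone searching the archives later: the override has to live in the ceph.conf that the client actually reads (e.g. /etc/ceph/ceph.conf on the glance controller node), not just on the cluster nodes:

[client.glance]
rbd default data pool = images_data_ec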

Something else that I missed was giving full permissions to the glance user. 

When I ran the rbd command you suggested, I received a permission error, which prompted me to look at the permissions; I realized I had not added a permission rule for the glance user.
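For the record, the caps change that sorted it out was along these lines (a sketch; pool names will vary with your setup):

ceph auth caps client.glance mon 'profile rbd' osd 'profile rbd pool=images, profile rbd pool=images_data_ec'

The key point is that client.glance needs access to both the metadata pool and the EC data pool.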

I can confirm that I have the data going into the EC pool from OpenStack!

Thanks again,

Mike

-----Original Message-----
From: Jason Dillaman <jdillama@xxxxxxxxxx>
Reply-To: "dillaman@xxxxxxxxxx" <dillaman@xxxxxxxxxx>
Date: Thursday, March 22, 2018 at 5:15 PM
To: Cave Mike <mcave@xxxxxxx>
Cc: "ceph-users@xxxxxxxxxxxxxx" <ceph-users@xxxxxxxxxxxxxx>
Subject: Re:  Erasure Coded Pools and OpenStack

On Fri, Mar 23, 2018 at 8:08 AM, Mike Cave <mcave@xxxxxxx> wrote:
> Greetings all!
>
>
>
> I’m currently attempting to create an EC pool for my glance images, however
> when I save an image through the OpenStack command line, the data is not
> ending up in the EC pool.
>
> So a little information on what I’ve done so far.
>
> The way that I understand things to work is that you need a metadata pool
> to front the EC pool, so I created an ‘images’ pool and then an
> ‘images_data_ec’ pool.
>
> Following are the steps I used.
>
>
>
> First, I created my EC profile:
>
>
>
> ceph osd erasure-code-profile set 2-1 k=2 m=1 crush-device-class=hdd
>
>
>
> I used the values of k=2 and m=1 to match my dev cluster config, which has
> only three OSD servers (10x4TB OSDs and 2 SSDs for journals per server)
> and a failure domain of server.
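>
> To double-check the profile after creating it:
>
> ceph osd erasure-code-profile get 2-1
>
> which should echo back k=2, m=1, and crush-device-class=hdd.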
>
>
>
> Then I created my pools:
>
>
>
> ceph osd pool create images 16
>
> ceph osd pool create images_data_ec 128 erasure 2-1
>
> ceph osd pool application enable images rbd
>
> ceph osd pool application enable images_data_ec rbd
>
> ceph osd pool set images_data_ec allow_ec_overwrites true
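>
> To verify those settings took effect (these should work on Luminous and
> later):
>
> ceph osd pool get images_data_ec allow_ec_overwrites
>
> ceph osd pool application get images_data_ec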
>
>
>
> Then I added the following to my ceph.conf to tell ceph to use the
> images_data_ec pool when the glance user is invoked. I then restarted all
> the ceph services on all the nodes.
>
>
>
> [client.glance]
>
> rbd default data pool = images_data_ec
>
>
>
> So with that configured I used the rbd cli to create an image:
>
>
>
> rbd create images/myimage --size 1G --data-pool images_data_ec

With that "rbd default data pool = images_data_ec" configuration
override, you shouldn't need to specify the "--data-pool" option.
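For example, something like the following (assuming the override is in the
ceph.conf the local rbd CLI reads; "test-image" is just a placeholder name)
should land the data objects in the EC pool automatically:

rbd --id glance create images/test-image --size 1G
rbd --id glance info images/test-image | grep data_pool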

>
> Then checked the image details:
>
>
>
> rbd -p images --image myimage info
>
> rbd image 'myimage':
>
>                 size 1024 GB in 262144 objects
>
>                 order 22 (4096 kB objects)
>
>                 data_pool: images_data_ec
>
>                 block_name_prefix: rbd_data.162.6fdb4874b0dc51
>
>                 format: 2
>
>                 features: layering, exclusive-lock, object-map, fast-diff,
> deep-flatten, data-pool
>
>                 flags:
>
>                 create_timestamp: Thu Mar 22 16:29:03 2018
>
>
>
> This looks okay so I continued on and uploaded an image through the
> OpenStack cli:
>
>
>
> openstack image create --disk-format qcow2 --unprotected --public --file
> cirros-0.4.0-x86_64-disk.img cirros-test-image
>
>
>
> However, when I inspect the image I see that it is not using the data pool
> as expected:

Have you ensured that your configuration override is on the glance
controller node? Have you confirmed that glance is configured to use
the "glance" user? If you run "rbd --id glance create images/<image
name> --size 1" is the image properly associated w/ the data pool?
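On the glance side, the RBD store configuration in glance-api.conf usually
looks something like this (option names per the glance_store RBD driver;
adjust to your deployment):

[glance_store]
stores = rbd
default_store = rbd
rbd_store_pool = images
rbd_store_user = glance
rbd_store_ceph_conf = /etc/ceph/ceph.conf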

>
> rbd -p images --image 91147e95-3e3d-4dc1-934d-bcaad7f645be info
>
> rbd image '91147e95-3e3d-4dc1-934d-bcaad7f645be':
>
>                 size 12418 kB in 2 objects
>
>                 order 23 (8192 kB objects)
>
>                 block_name_prefix: rbd_data.6fdbe73cf7c855
>
>                 format: 2
>
>                 features: layering, exclusive-lock, object-map, fast-diff,
> deep-flatten
>
>                 flags:
>
>                 create_timestamp: Thu Mar 22 16:29:37 2018
>
>
>
> When I look at the usage of the two pools, only the images pool has any
> data in it. Also, when I query the EC pool for a list of images, it returns
> empty, even though I expected to see something there from the image I
> created with the rbd cli. Should there be something to query in the EC
> pool to prove data is being written there?

The EC pool would only be used for data, so an "rbd ls images_data_ec"
is expected to return zero images (since the images are registered in
the "images" pool in your case).
>
> So far, I have been unable to get this to work and I’m completely at a loss.
>
> Does anyone have experience with this configuration, or perhaps some
> guidance for getting it to work?
>
> Ideally, I want to configure OpenStack to make use of EC pools for bulk data
> and then use replicated pools for active data (such as OS volumes and
> database volumes).
>
> Thank you for taking the time to read this far.
>
> I am happy to provide any further details you might need or try any
> configuration changes you might suggest. This is purely a development
> environment, so I'm not afraid to try things.
>
>
>
> Cheers,
>
> Mike
>
>



-- 
Jason

_______________________________________________
ceph-users mailing list
ceph-users@xxxxxxxxxxxxxx
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com



