Re: OpenStack and ceph integration with puppet

On 08/10/2013 15:47, Sébastien Han wrote:
> Hi Loïc,
> 
> Actually there are some steps that might be automated, such as:
> 
> * set the virsh secret
> * create both glance and cinder pools

Where do you think it might already be automated?
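For reference, the two quoted steps boil down to roughly the following shell sketch. The pool names, placement-group count, and the secret UUID are illustrative assumptions, not values from the thread:

```shell
# Sketch: create the glance and cinder pools (names and pg counts
# are assumptions), then register the cinder key with libvirt.
ceph osd pool create images 128
ceph osd pool create volumes 128

# Example UUID only; generate a fresh one with uuidgen.
SECRET_UUID="457eb676-33da-42ec-9194-0f2f1a0e8647"
cat > secret.xml <<EOF
<secret ephemeral='no' private='no'>
  <uuid>$SECRET_UUID</uuid>
  <usage type='ceph'>
    <name>client.volumes secret</name>
  </usage>
</secret>
EOF

# On each compute host:
virsh secret-define --file secret.xml
virsh secret-set-value --secret "$SECRET_UUID" \
  --base64 "$(ceph auth get-key client.volumes)"
```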

> Please take a look at:
> 
> * https://github.com/dontalton/puppet-cephdeploy/blob/master/manifests/init.pp#L121
> * https://github.com/dontalton/puppet-cephdeploy/blob/master/manifests/osd.pp#L73

Right! I had overlooked this puppet module.

> For the rest, this might already be done by your puppet manifests.

I plan to not write any manifest :-)

> Please also note that http://ceph.com/docs/next/rbd/rbd-openstack/ will need some updates for OpenStack Havana.

Cheers

> ––––
> Sébastien Han
> Cloud Engineer
> 
> "Always give 100%. Unless you're giving blood.”
> 
> Phone: +33 (0)1 49 70 99 72
> Mail: sebastien.han@xxxxxxxxxxxx
> Address : 10, rue de la Victoire - 75009 Paris
> Web : www.enovance.com - Twitter : @enovance
> 
> On October 8, 2013 at 4:18:00 PM, Loic Dachary (loic@xxxxxxxxxxx) wrote:
> 
> Hi Ceph,  
> 
> Binding ceph to cinder and glance using puppet requires the following steps:  
> 
> * Deploy ceph ( with ceph-deploy, puppet, chef ... )  
> 
> * Follow the ceph documentation instructions ( valid for both cinder and glance )  
> 
> http://ceph.com/docs/next/rbd/rbd-openstack/  
> 
> * Part of the above instructions can be skipped if the following are used  
> 
> https://github.com/stackforge/puppet-glance/blob/stable/grizzly/manifests/backend/rbd.pp  
> https://github.com/stackforge/puppet-cinder/blob/stable/grizzly/manifests/volume/rbd.pp  
> 
> They take care of installing a package on the glance and cinder nodes and writing the cinder and glance config files.  
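Concretely, the settings those two manifests manage look like the following sketch. Option names follow the rbd-openstack document of that era; the class names are inferred from the manifest paths, and the user/pool names are assumptions matching the client.volumes / client.images users mentioned later in the thread:

```ini
# glance-api.conf, as managed by glance::backend::rbd (sketch)
default_store = rbd
rbd_store_user = images
rbd_store_pool = images
rbd_store_ceph_conf = /etc/ceph/ceph.conf

# cinder.conf, as managed by cinder::volume::rbd (sketch)
volume_driver = cinder.volume.drivers.rbd.RBDDriver
rbd_pool = volumes
rbd_user = volumes
rbd_secret_uuid = <uuid registered with virsh secret-define>
```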
> 
> * Upgrading the librbd on the compute hosts to match the version of the cluster ( ubuntu precise has bobtail but you may want at least cuttlefish )  
> 
> I would be delighted to know if there is a simpler way. If not, would it make sense to provide the puppet master with the IP of the monitors and admin rights so that it can automate http://ceph.com/docs/next/rbd/rbd-openstack/ ?  
> 
> * install ceph-common on cinder hosts and python-ceph on glance hosts  
> * set the monitor addresses  
> * copy the keyring to cinder / glance  
> * create the client.volumes / client.images users ( support <= 0.53 ? )  
> * upgrade the librbd package on the compute hosts to the version matching the cluster  
> * virsh secret-set-value the volume key on each compute host  
> * reload glance/nova/cinder where appropriate  
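Taken together, the list above amounts to roughly the following. The capability strings follow the rbd-openstack document; the pool names and the Debian/Ubuntu package and service names are assumptions:

```shell
# Sketch of the per-host steps listed above.

# On the cinder hosts:   apt-get install ceph-common
# On the glance hosts:   apt-get install python-ceph

# Capabilities as given in the rbd-openstack document.
VOLUMES_CAPS='allow class-read object_prefix rbd_children, allow rwx pool=volumes, allow rx pool=images'
IMAGES_CAPS='allow class-read object_prefix rbd_children, allow rwx pool=images'

# On a monitor host: create the users, then copy each keyring to
# /etc/ceph/ on the host that needs it.
ceph auth get-or-create client.volumes mon 'allow r' osd "$VOLUMES_CAPS"
ceph auth get-or-create client.images  mon 'allow r' osd "$IMAGES_CAPS"

# On each compute host: register the client.volumes key with
# libvirt via virsh secret-set-value.

# Finally, reload the consumers where appropriate:
#   service glance-api restart ; service cinder-volume restart
```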
> 
> The puppet master could even refresh the list of monitors from time to time and update the cinder/glance nodes accordingly. And it could do the right thing depending on the target openstack version and ceph version.  
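The refresh itself is cheap: the monitor addresses can be pulled from `ceph mon dump` and templated into the clients' ceph.conf. A sketch of the extraction step, run here against a captured sample of the dump output so the parsing is visible (the addresses are made up):

```shell
# Parse monitor addresses out of `ceph mon dump` output.
# In real puppet code the input would be `ceph mon dump` itself;
# a captured sample with made-up addresses is used here.
sample_dump() {
  cat <<'EOF'
epoch 3
fsid 07553bb9-0000-0000-0000-000000000000
0: 192.168.0.1:6789/0 mon.a
1: 192.168.0.2:6789/0 mon.b
2: 192.168.0.3:6789/0 mon.c
EOF
}

# Keep only the "rank: addr/nonce name" lines, strip the nonce,
# and join the addresses with commas.
MON_HOSTS=$(sample_dump | \
  awk -F'[ /]' '/^[0-9]+:/ {a = a sep $2; sep = ","} END {print a}')
echo "mon_host = $MON_HOSTS"
```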
> 
> Thoughts ?  
> 
> --  
> Loïc Dachary, Artisan Logiciel Libre  
> All that is necessary for the triumph of evil is that good people do nothing.
> 

-- 
Loïc Dachary, Artisan Logiciel Libre
All that is necessary for the triumph of evil is that good people do nothing.


