Re: Stackforge Puppet Module

What comes to mind is that you need to make sure you've cloned the git repository to /etc/puppet/modules/ceph and not /etc/puppet/modules/puppet-ceph: Puppet resolves ceph::repo by looking for <modulepath>/ceph/manifests/repo.pp, so the directory name has to match the module name.
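
If you manage your module checkouts with Puppet itself, something along these lines (a sketch, assuming the puppetlabs/vcsrepo module is available) would keep the clone pinned to the right path:

    vcsrepo { '/etc/puppet/modules/ceph':
      ensure   => present,
      provider => git,
      source   => 'https://github.com/ceph/puppet-ceph.git',
    }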

Feel free to hop on IRC to discuss puppet-ceph on freenode in #puppet-openstack.
You can find me there as dmsimard.

--
David Moreau Simard

> On Nov 12, 2014, at 8:58 AM, Nick Fisk <nick@xxxxxxxxxx> wrote:
> 
> Hi David,
> 
> Many thanks for your reply.
> 
> I must admit I have only just started looking at Puppet, but a lot of what
> you said makes sense to me, and I understand the reason for not having the
> module auto-discover disks.
> 
> I'm currently having a problem with the ceph::repo class when trying to push
> this out to a test server:-
> 
> Error: Could not retrieve catalog from remote server: Error 400 on SERVER:
> Could not find class ceph::repo for ceph-puppet-test on node
> ceph-puppet-test
> Warning: Not using cache on failed catalog
> Error: Could not retrieve catalog; skipping run
> 
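> In case it matters, the declaration itself is just the standard form,
> something along these lines:
> 
>    node 'ceph-puppet-test' {
>      class { 'ceph::repo': }
>    }
> 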
> I'm a bit stuck, but hopefully I'll work out why it's not working soon, and
> then I can attempt your idea of using a script to dynamically pass disks to
> the puppet module.
> 
> Thanks,
> Nick
> 
> 
> -----Original Message-----
> From: ceph-users [mailto:ceph-users-bounces@xxxxxxxxxxxxxx] On Behalf Of
> David Moreau Simard
> Sent: 11 November 2014 12:05
> To: Nick Fisk
> Cc: ceph-users@xxxxxxxxxxxxxx
> Subject: Re:  Stackforge Puppet Module
> 
> Hi Nick,
> 
> The great thing about puppet-ceph's implementation on Stackforge is that it
> is both unit and integration tested.
> You can see the integration tests here:
> https://github.com/ceph/puppet-ceph/tree/master/spec/system
> 
> What I'm getting at is that the tests show you, to a certain extent, how
> you can use the module.
> For example, in the OSD integration tests:
> - https://github.com/ceph/puppet-ceph/blob/master/spec/system/ceph_osd_spec.rb#L24
> - https://github.com/ceph/puppet-ceph/blob/master/spec/system/ceph_osd_spec.rb#L82-L110
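> 
> Paraphrasing loosely (the real tests do more), those manifests boil down to
> declaring the ceph class and then one ceph::osd per device:
> 
>    # Loose paraphrase of the test manifests, not verbatim:
>    class { 'ceph':
>      fsid => 'a-placeholder-uuid',
>    }
>    ceph::osd { '/dev/sdb': }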
> 
> There's no auto-discovery mechanism built into the module right now. It
> would be kind of dangerous: you don't want to format the wrong disks.
> 
> Now, this doesn't mean you can't "discover" the disks yourself and pass them
> to the module from your site.pp or from a composition layer.
> Here's something I have for my CI environment that uses the $::blockdevices
> fact to discover all devices, splits that fact into a list of devices and
> then rejects the drives I don't want (such as the OS disk):
> 
>    # Assume OS is installed on xvda/sda/vda.
>    # On an OpenStack VM, vdb is ephemeral; we don't want to use vdc.
>    # WARNING: ALL OTHER DISKS WILL BE FORMATTED/PARTITIONED BY CEPH!
>    $block_devices = reject(split($::blockdevices, ','), '(xvda|sda|vda|vdc|sr0)')
>    $devices = prefix($block_devices, '/dev/')
> 
> And then you can pass $devices to the module.
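> 
> For example, assuming ceph::osd is the define that takes the device path as
> its title:
> 
>    # One OSD per discovered device; the resource titles are device paths.
>    ceph::osd { $devices: }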
> 
> Let me know if you have any questions!
> --
> David Moreau Simard
> 
>> On Nov 11, 2014, at 6:23 AM, Nick Fisk <nick@xxxxxxxxxx> wrote:
>> 
>> Hi,
>> 
>> I'm just looking through the different methods of deploying Ceph and I
>> particularly liked the idea the stackforge puppet module advertises of
>> using discovery to automatically add new disks. I understand the
>> principle of how it should work (using ceph-disk list to find unknown
>> disks), but I would like to see in a little more detail how it has been
>> implemented.
>> 
>> I've been looking through the puppet module on GitHub, but I can't see
>> anywhere that this discovery is carried out.
>> 
>> Could anyone confirm whether this puppet module does currently support
>> auto discovery, and where in the code it's carried out?
>> 
>> Many Thanks,
>> Nick
>> 
> 

_______________________________________________
ceph-users mailing list
ceph-users@xxxxxxxxxxxxxx
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com



