Re: Ceph-deploy OSD Prepare error

On Thu, 16 May 2013, Ian_M_Porter@xxxxxxxx wrote:
> Yeah that's what I did, created another server and I got HEALTH_OK :)
> 
> I guess I was a bit confused by the osd_crush_chooseleaf_type setting 
> in the ceph.conf file.  I had this set to 0, which, according to the 
> documentation, should let placement happen across my 2 OSDs on the 
> single node (clearly it didn't, or I've misunderstood what it does).

Note that that setting only has an effect when the cluster is first 
created (when you deploy the first monitor(s)).  After that, the CRUSH 
rules already exist and need to be modified directly.
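
For a single-node test setup, the usual workaround is to edit the 
replicated rule so it distributes across OSDs rather than hosts.  A rough 
sketch (file names below are just placeholders):

  ceph osd getcrushmap -o crushmap.bin
  crushtool -d crushmap.bin -o crushmap.txt
  # in crushmap.txt, change the rule step
  #   step chooseleaf firstn 0 type host
  # to
  #   step chooseleaf firstn 0 type osd
  crushtool -c crushmap.txt -o crushmap.new
  ceph osd setcrushmap -i crushmap.new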

> On a side note, can someone point me to the documentation (if any) that 
> describes how to configure ceph-deploy to add/change the conf file?  
> At the moment, I have a vanilla set of items but I can't seem to add new 
> values.  I tried updating the ceph.conf file on my ceph-deploy admin 
> node and calling ceph-deploy config push, but the config file that was 
> deployed to the OSD nodes didn't contain the changes.  I suspect the 
> file is being generated by the ceph-deploy software; is this correct?

 ceph-deploy config push HOST

should copy the ceph.conf in the local directory to the remote node.  If 
you can confirm that it isn't working, let us know!
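
For example, after editing ceph.conf in the directory you run ceph-deploy 
from (hostname below is just a placeholder):

  ceph-deploy config push ceph-server

If the remote ceph.conf already exists with different content, ceph-deploy 
should refuse to replace it unless you pass --overwrite-conf:

  ceph-deploy --overwrite-conf config push ceph-server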

Thanks-
sage


> Ian
> 
> 
> 
> -----Original Message-----
> From: Travis Rhoden [mailto:trhoden@xxxxxxxxx] 
> Sent: 16 May 2013 14:54
> To: Porter, Ian M
> Cc: igor.laskovy@xxxxxxxxx; ceph-users
> Subject: Re:  Ceph-deploy OSD Prepare error
> 
> Ian,
> 
> If you are only running one server (with 2 OSDs), you should probably take a look at your CRUSH map.  I haven't used ceph-deploy myself yet, but with mkcephfs the default CRUSH map is constructed such that the 2 replicas must be on different hosts, not just on different OSDs.  This is so that if you lose that one host, you don't lose access to both copies of your data.  If this is indeed your setup, the PGs will be stuck forever.
> 
> While not recommended for a production setup, if you want to play around and get things to HEALTH_OK, you could modify your CRUSH map to just choose two OSDs instead of two hosts.
> 
>  - Travis
> 
> On Wed, May 15, 2013 at 1:07 PM,  <Ian_M_Porter@xxxxxxxx> wrote:
> > Hi Igor,
> >
> >
> >
> > I've tried that plus some other combinations and each time I get the
> > "ValueError: need more than 3 values to unpack" message
> >
> >
> >
> > I seem to have partial success; for instance, I can run the ceph-deploy 
> > activate command and it returns successfully.  However, if I check 
> > the health of the cluster I get
> >
> >
> >
> > HEALTH_WARN 192 pgs degraded; 192 pgs stuck unclean; recovery 21/42 
> > degraded (50.000%), so something's clearly not right
> >
> >
> >
> > My setup is very simple, with my ceph admin node and ceph server node 
> > running Ubuntu 12.04 within VMware Workstation.  The ceph server node 
> > has 2 additional disks, sdb and sdc, on which I am attempting to create the OSDs.
> >
> >
> >
> > Also my ceph.conf is vanilla, so using the defaults.
> >
> >
> >
> > Regards,
> >
> >
> >
> > Ian
> >
> >
> >
> > From: Igor Laskovy [mailto:igor.laskovy@xxxxxxxxx]
> > Sent: 15 May 2013 15:37
> > To: Porter, Ian M
> > Cc: ceph-users
> > Subject: Re:  Ceph-deploy OSD Prepare error
> >
> >
> >
> > Hi, Ian,
> >
> >
> >
> > try "ceph-deploy osd prepare ceph-server:sdc1"
> >
> > If you have used "ceph-deploy disk zap", it creates a single partition.
> >
> >
> >
> > On Wed, May 15, 2013 at 4:51 PM, <Ian_M_Porter@xxxxxxxx> wrote:
> >
> > Hi,
> >
> >
> >
> > I am deploying using ceph-deploy (following the quickstart guide) and 
> > getting the following error on
> >
> >
> >
> > ceph-deploy osd prepare ceph-server:sdc
> >
> >
> >
> >
> >
> >> Traceback (most recent call last):
> >>   File "/home/user/ceph-deploy/ceph-deploy", line 9, in <module>
> >>     load_entry_point('ceph-deploy==0.1', 'console_scripts', 'ceph-deploy')()
> >>   File "/home/user/ceph-deploy/ceph_deploy/cli.py", line 112, in main
> >>     return args.func(args)
> >>   File "/home/user/ceph-deploy/ceph_deploy/osd.py", line 426, in osd
> >>     prepare(args, cfg, activate_prepared_disk=False)
> >>   File "/home/user/ceph-deploy/ceph_deploy/osd.py", line 269, in prepare
> >>     dmcrypt_dir=args.dmcrypt_key_dir,
> >> ValueError: need more than 3 values to unpack
> >
> >
> >
> > Any suggestions?
> >
> >
> >
> > Regards
> >
> >
> >
> > Ian
> >
> >
> >
> >
> >
> >
> >
> >
> >
> >
> > --
> >
> > Igor Laskovy
> > facebook.com/igor.laskovy
> >
> > studiogrizzly.com
> >
> >
> >
> >
> Dell Corporation Limited is registered in England and Wales. Company Registration Number: 2081369
> Registered address: Dell House, The Boulevard, Cain Road, Bracknell,  Berkshire, RG12 1LF, UK.
> Company details for other Dell UK entities can be found on  www.dell.co.uk.
> 
> 
> 
_______________________________________________
ceph-users mailing list
ceph-users@xxxxxxxxxxxxxx
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com



