Can ceph-deploy be used with 'osd objectstore = keyvaluestore-dev' in config file ?

I have got my Ceph OSDs running with the keyvalue store now!

Thanks, Mark! I had been confused for a whole week.
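
(For anyone who finds this thread later: the setting from the subject line
goes in the [osd] section of ceph.conf. A minimal sketch - the option name is
from this thread, the section placement is the standard layout, not copied
from my actual config:

    [osd]
    # use the experimental key/value backend instead of the default filestore
    osd objectstore = keyvaluestore-dev

With that in place, "ceph-osd --mkfs" builds a keyvalue store instead of a
filestore.)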

============
Cheers
Aegeaner


On 2014-09-24 10:46, Mark Kirkwood wrote:
> On 24/09/14 14:29, Aegeaner wrote:
>> I run Ceph on Red Hat Enterprise Linux Server 6.4 (Santiago), and when I
>> run "service ceph start" I get:
>>
>> # service ceph start
>>
>>     ERROR:ceph-disk:Failed to activate
>>     ceph-disk: Does not look like a Ceph OSD, or incompatible version:
>>     /var/lib/ceph/tmp/mnt.I71N5T
>>     mount: /dev/hioa1 already mounted or /var/lib/ceph/tmp/mnt.02sVHj busy
>>     ceph-disk: Mounting filesystem failed: Command '['/bin/mount', '-t', 'xfs', '-o', 'noatime', '--', '/dev/disk/by-parttypeuuid/4fbd7e29-9d25-41b8-afd0-062c0ceff05d.6d726c93-41f9-453d-858e-ab4132b5c8fd', '/var/lib/ceph/tmp/mnt.02sVHj']' returned non-zero exit status 32
>>     ceph-disk: Error: One or more partitions failed to activate
>>
>> Someone told me that "service ceph start" still calls ceph-disk, which
>> will create a filestore-type OSD and a journal partition - is that true?
>>
>> ls -l /dev/disk/by-parttypeuuid/
>>
>>     lrwxrwxrwx. 1 root root 11 Sep 23 16:56 45b0969e-9b03-4f30-b4c6-b4b80ceff106.00dbee5e-fb68-47c4-aa58-924c904c4383 -> ../../hioa2
>>     lrwxrwxrwx. 1 root root 10 Sep 23 17:02 45b0969e-9b03-4f30-b4c6-b4b80ceff106.c30e5b97-b914-4eb8-8306-a9649e1c20ba -> ../../sdb2
>>     lrwxrwxrwx. 1 root root 11 Sep 23 16:56 4fbd7e29-9d25-41b8-afd0-062c0ceff05d.6d726c93-41f9-453d-858e-ab4132b5c8fd -> ../../hioa1
>>     lrwxrwxrwx. 1 root root 10 Sep 23 17:02 4fbd7e29-9d25-41b8-afd0-062c0ceff05d.b56ec699-e134-4b90-8f55-4952453e1b7e -> ../../sdb1
>>     lrwxrwxrwx. 1 root root 11 Sep 23 16:52 89c57f98-2fe5-4dc0-89c1-f3ad0ceff2be.6d726c93-41f9-453d-858e-ab4132b5c8fd -> ../../hioa1
>>
>> There seem to be two symlinks pointing to hioa1, maybe left over from the
>> last time I created the OSD using "ceph-deploy osd prepare"?
>>
>
> Crap - it is fighting you, yes. It looks like the startup script has
> tried to build an OSD for you using ceph-disk (which makes two
> partitions by default), and that has toasted the setup your script did.
>
> Growl - that's made it more complicated for sure.
>
> If you re-run your script, you'll blast away the damage that 'service'
> did :-). Also take a look at /etc/init.d/ceph to see why it ignored
> your osd.0 argument (I'm not sure what it expects - maybe just 'osd').
> Anyway, experiment.
>
> You can always start the osd with:
>
> $ sudo ceph-osd -i 0
>
> which bypasses the whole system startup confusion completely :-)
>
> Cheers
>
> Mark
>
>
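
For reference, the manual bring-up that Mark's advice points at (and that my
script does) is roughly the following. A sketch only - it assumes the default
cluster name 'ceph', a data disk already formatted and mounted at the OSD
data path, and admin keys in place; the id, weight, and hostname are
illustrative:

    # allocate a new OSD id from the monitors
    OSD_ID=$(ceph osd create)

    # the data disk should already be mounted here
    mkdir -p /var/lib/ceph/osd/ceph-$OSD_ID

    # initialise the data directory and generate the OSD's auth key;
    # with 'osd objectstore = keyvaluestore-dev' in ceph.conf this
    # builds a keyvalue store rather than a filestore
    ceph-osd -i $OSD_ID --mkfs --mkkey

    # register the key and place the OSD in the CRUSH map
    ceph auth add osd.$OSD_ID osd 'allow *' mon 'allow rwx' \
        -i /var/lib/ceph/osd/ceph-$OSD_ID/keyring
    ceph osd crush add osd.$OSD_ID 1.0 host=$(hostname -s)

    # start the daemon directly, bypassing the init script as Mark suggests
    ceph-osd -i $OSD_ID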


