Re: Automatic OSD start on Jewel

On 01/04/2017 12:18 PM, Fabian Grünbichler wrote:
> On Wed, Jan 04, 2017 at 12:03:39PM +0100, Florent B wrote:
>> Hi everyone,
>>
>> I have a problem with automatic start of OSDs on Debian Jessie with Ceph
>> Jewel.
>>
>> My osd.0 uses /dev/sda5 for data and /dev/sda2 for journal; it is
>> listed by "ceph-disk list":
>>
>> /dev/sda :
>>  /dev/sda1 other, 21686148-6449-6e6f-744e-656564454649
>>  /dev/sda3 other, linux_raid_member
>>  /dev/sda4 other, linux_raid_member
>>  /dev/sda2 ceph journal, for /dev/sda5
>>  /dev/sda5 ceph data, active, cluster ceph, osd.0, journal /dev/sda2
>>
>> It was created with ceph-disk prepare.
>>
>> When I run "ceph-disk activate /dev/sda5", it is mounted and started.
>>
>> If I run "systemctl start ceph-disk@/dev/sda5", the same thing happens
>> and it's OK. But this is a service that can't be "enabled"!
>>
>> But on reboot, nothing happens. The only thing that tries to start is
>> the ceph-osd@0 service (enabled by ceph-disk, not by me), and of course
>> it fails because its data is not mounted.
>>
>> I think the udev rules should do this, but they do not seem to.
>>
>>
>> root@host102:~# sgdisk -i 2 /dev/sda
>> Partition GUID code: 45B0969E-9B03-4F30-B4C6-B4B80CEFF106 (Unknown)
>> Partition unique GUID: D0F4F00F-723D-4DAD-BA2E-93D52EB564C1
>> First sector: 2048 (at 1024.0 KiB)
>> Last sector: 9765887 (at 4.7 GiB)
>> Partition size: 9763840 sectors (4.7 GiB)
>> Attribute flags: 0000000000000000
>> Partition name: 'ceph journal'
>>
>> root@host102:~# sgdisk -i 5 /dev/sda
>> Partition GUID code: 4FBD7E29-9D25-41B8-AFD0-062C0CEFF05D (Unknown)
>> Partition unique GUID: 5AB4F732-AFBE-4DEA-A4C6-AD290C1302D9
>> First sector: 123047424 (at 58.7 GiB)
>> Last sector: 1953459199 (at 931.5 GiB)
>> Partition size: 1830411776 sectors (872.8 GiB)
>> Attribute flags: 0000000000000000
>> Partition name: 'ceph data'
>>
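>> For reference, the two "Unknown" partition type GUIDs above are just the
>> Ceph data and journal type codes (sgdisk does not know their names). The
>> udev rule I would expect to fire on the data partition lives in
>> /lib/udev/rules.d/95-ceph-osd.rules and looks roughly like this (quoted
>> from memory of the Jewel package, so details may differ):
>>
>> # Activate a Ceph data partition as soon as the kernel announces it
>> ACTION=="add", SUBSYSTEM=="block", ENV{DEVTYPE}=="partition", \
>>   ENV{ID_PART_ENTRY_TYPE}=="4fbd7e29-9d25-41b8-afd0-062c0ceff05d", \
>>   RUN+="/usr/sbin/ceph-disk --log-stdout -v trigger /dev/$name"
>>
>> A dry run that shows whether the rule matches, without executing it:
>>
>> udevadm test $(udevadm info -q path -n /dev/sda5)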
>>
>> Does anyone have an idea of what's going on?
>>
>> Thank you.
>>
>> Florent
> Are you using the packages from ceph.com? If so, you might be affected
> by http://tracker.ceph.com/issues/18305 (and
> http://tracker.ceph.com/issues/17889).
>
> Did you mask the ceph.service unit generated from the ceph init script?
>
> What does "systemctl status '*ceph*'" show? What does "journalctl -b
> '*ceph*'" show?
>
> What happens if you run "ceph-disk activate-all"? (This is what gets
> called last in the init script and will probably trigger mounting of the
> OSD disk/partition and starting of the ceph-osd@.. service.)
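>
> A quick way to check the state of that generated unit (on Jessie the
> ceph init script is converted to a unit by systemd-sysv-generator; the
> commands below are a sketch, paths from memory):
>
> systemctl is-enabled ceph.service
> systemctl cat ceph.service
> ls -l /run/systemd/generator.late/ | grep ceph
>
> Note that "ceph-disk activate-all" scans /dev/disk/by-parttypeuuid/ for
> the Ceph data type GUID (if those symlinks exist), so it can activate
> OSDs even when udev did not fire at boot.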
>

Thank you, that was the problem: I had disabled the ceph.service unit
because I thought it was an "old" thing; I didn't know it is still used.
Re-enabling it did the trick.
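
For the record, roughly what I ran to undo it (on Jessie, systemctl
forwards enable/disable of init scripts to update-rc.d, so the output
may look a bit different):

systemctl unmask ceph.service   # only needed if the unit was masked
systemctl enable ceph.service
systemctl start ceph.service    # runs the init script, which ends by
                                # calling "ceph-disk activate-all"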

Isn't it an "old way" of doing things?



