Re: something missing in filestore to bluestore conversion

On Mon, May 7, 2018 at 7:24 AM, Eugen Block <eblock@xxxxxx> wrote:
> Hi,
>
> I'm not sure if this is deprecated or something, but I usually have to
> execute an additional "ceph auth del <ID>" before recreating an OSD.
> Otherwise the OSD fails to start. Maybe this is a missing step.
>
> Regards,
> Eugen
>
>
> Quoting Gary Molenkamp <molenkam@xxxxxx>:
>
>
>> Good morning all,
>>
>> Last week I started converting my filestore based OSDs to bluestore using
>> the following steps assembled from documentation and mailing list:
>>
>> admin:  ceph osd out ${OSD_ID}
>>
>> on stor-node:
>> systemctl kill ceph-osd@${OSD_ID}
>> umount /var/lib/ceph/osd/ceph-${OSD_ID}
>> ceph-disk zap /dev/sdX
>> ceph-volume lvm zap /dev/sdx
>>
>> on admin:
>> ceph osd destroy ${OSD_ID} --yes-i-really-mean-it
>> ceph osd purge ${OSD_ID} --yes-i-really-mean-it
>>
>> on stor-node:
>> ceph-volume lvm prepare --bluestore --data /dev/sde --block.db /dev/sdc1

This is missing a super important step.

The "prepare" step in ceph-volume, much like in ceph-disk, only makes
sure that all the devices are mounted and the
necessary parts for the OSD to run exist.

It is not *activated*, that is: it will not start the OSD and it will
not come up after boot. I've always hesitated on the compatible
prepare/activate that ceph-disk had
for this reason. "activate" is also in charge of creating the systemd
units, so all of this is automatic. There should never be a need to
start the OSD manually.
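
For context, here is a rough sketch of what activation sets up behind the
scenes (going from memory, so treat the exact unit names as an assumption
rather than gospel):

    # "ceph-volume lvm activate $OSD_ID $OSD_FSID" roughly boils down to:
    systemctl enable ceph-volume@lvm-${OSD_ID}-${OSD_FSID}   # re-mounts the tmpfs at boot
    systemctl enable --runtime ceph-osd@${OSD_ID}
    systemctl start ceph-osd@${OSD_ID}

That ceph-volume@ unit is the same kind of unit you later tried to start
by hand below.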

If you are not trying to prevent a situation where multiple OSDs are
brought up at the same time, you can just use 'create' (it takes the
exact same flags as 'prepare'):

    ceph-volume lvm create --bluestore --data /dev/sde --block.db /dev/sdc1

Otherwise, you will need to know the OSD ID and the OSD FSID to activate:

    ceph-volume lvm activate $OSD_ID $OSD_FSID
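
If you don't have those at hand, running 'ceph-volume lvm list' on the
storage node should report the "osd id" and "osd fsid" for every OSD it
knows about, for example:

    ceph-volume lvm list
    ceph-volume lvm activate 0 <osd fsid reported for osd.0 above>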

This is explained in the first section of the "lvm prepare" docs:
http://docs.ceph.com/docs/master/ceph-volume/lvm/prepare

    Note: This is part of a two step process to deploy an OSD. If
looking for a single-call way, please see create
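
In your case, since the OSDs were prepared but never activated, activating
them should bring back the tmpfs mounts and get them starting at boot
again. If I remember correctly there is also an --all flag that activates
any OSDs it discovers on the host in one go:

    ceph-volume lvm activate --all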




>> systemctl start ceph-osd@${OSD_ID}
>>
>>
>> However, after a reboot of the storage node, none of the OSDs are
>> starting.  I noticed that the tmpfs based filesystems mounted on
>> /var/lib/ceph/osd/ceph-X prior to reboot do not exist.  Did I miss a step
>> when converting to bluestore?  Was there a missed step when converting to
>> ceph-volume?
>>
>> The documentation for ceph-volume does mention an activation step
>> that didn't appear to be needed after the above prepare (the OSD mounted
>> and started recovery when "systemctl start ceph-osd@0" was issued).
>>

>>
>> Note:  I can see all of the lvm volumes on the storage node:
>>
>> # pvscan
>>   PV /dev/sdg   VG ceph-9854968e-659b-432a-b816-8ce3400e90d3 lvm2 [<2.73
>> TiB / 0    free]
>>   PV /dev/sdi   VG ceph-08abf9b3-fae0-4113-b889-0bb218e6d613 lvm2 [<2.73
>> TiB / 0    free]
>>   PV /dev/sdh   VG ceph-cddd89e5-87b2-4d8a-b905-03bd7c50a429 lvm2 [<2.73
>> TiB / 0    free]
>>   PV /dev/sde   VG ceph-baa8d599-02ab-4a55-9b2f-a82fef253df8 lvm2 [<2.73
>> TiB / 0    free]
>>   PV /dev/sdf   VG ceph-2ea66a6f-762d-4f46-8ec9-ff9f4a114ef9 lvm2 [<2.73
>> TiB / 0    free]
>>   PV /dev/sdk   VG ceph-6bb12ce7-042b-4ffe-bf7a-62296dc36fa2 lvm2 [<2.73
>> TiB / 0    free]
>>   PV /dev/sdl   VG ceph-7a6a54c9-664f-499a-83ad-7396d801ab3f lvm2 [<2.73
>> TiB / 0    free]
>>   PV /dev/sdj   VG ceph-50effcbd-a926-4a7f-bb00-62a6fa5b6aec lvm2 [<2.73
>> TiB / 0    free]
>>
>> But if I issue the following it just hangs/timeouts:
>>
>> systemctl start ceph-volume@lvm-0-baa8d599-02ab-4a55-9b2f-a82fef253df8
>>
>>
>> Any assistance would be appreciated.
>>
>> Gary
>>
>>
>> --
>> Gary Molenkamp                  Computer Science/Science Technology
>> Services
>> Systems Administrator           University of Western Ontario
>> molenkam@xxxxxx                 http://www.csd.uwo.ca
>> (519) 661-2111 x86882           (519) 661-3566
>>
_______________________________________________
ceph-users mailing list
ceph-users@xxxxxxxxxxxxxx
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com


