Re: something missing in filestore to bluestore conversion

Thanks Eugen,

The OSDs start immediately after completing the "ceph-volume prepare", but they won't start after a clean reboot. It seems that "prepare" mounts the /var/lib/ceph/osd/ceph-osdX path/structure, but that mount is now missing from my boot process.
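The usual cause of tmpfs mounts vanishing at boot is that the per-OSD ceph-volume systemd unit was never enabled; it is that unit that re-runs activation and recreates the mount. A minimal sketch, assuming the OSD id and fsid below (example values; take yours from "ceph-volume lvm list"):

```shell
# Sketch: enable the per-OSD ceph-volume systemd unit so the tmpfs under
# /var/lib/ceph/osd/ceph-${OSD_ID} is recreated on every boot.
# OSD_ID and OSD_FSID are example values, not taken from a live cluster.
OSD_ID=0
OSD_FSID=baa8d599-02ab-4a55-9b2f-a82fef253df8

# ceph-volume names its instance units lvm-<id>-<fsid>:
UNIT="ceph-volume@lvm-${OSD_ID}-${OSD_FSID}"
echo "${UNIT}"

# Enabling the unit (not run here) wires the activation into boot:
#   systemctl enable "${UNIT}"
```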

Gary.


On 2018-05-07 07:24 AM, Eugen Block wrote:
Hi,

I'm not sure if this is deprecated or something, but I usually have to execute an additional "ceph auth del <ID>" before recreating an OSD. Otherwise the OSD fails to start. Maybe this is a missing step.
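For reference, the extra step Eugen mentions fits into the teardown roughly like this (a sketch with an example OSD id; the cluster commands are shown as comments, not run):

```shell
# Sketch of where 'ceph auth del' fits into OSD removal.
# OSD_ID is an example value.
OSD_ID=0

# cephx auth entries for OSDs are named osd.<ID>:
AUTH_ENTITY="osd.${OSD_ID}"
echo "${AUTH_ENTITY}"

# On the admin node (not run here):
#   ceph osd out ${OSD_ID}
#   ceph auth del ${AUTH_ENTITY}    # remove the old cephx key
#   ceph osd purge ${OSD_ID} --yes-i-really-mean-it
```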

Regards,
Eugen


Zitat von Gary Molenkamp <molenkam@xxxxxx>:

Good morning all,

Last week I started converting my filestore-based OSDs to bluestore, using the following steps assembled from the documentation and this mailing list:

admin:  ceph osd out ${OSD_ID}

on stor-node:
systemctl kill ceph-osd@${OSD_ID}
umount /var/lib/ceph/osd/ceph-${OSD_ID}
ceph-disk zap /dev/sdX
ceph-volume lvm zap /dev/sdX

on admin:
ceph osd destroy ${OSD_ID} --yes-i-really-mean-it
ceph osd purge ${OSD_ID} --yes-i-really-mean-it

on stor-node:
ceph-volume lvm prepare --bluestore --data /dev/sde --block.db /dev/sdc1
systemctl start ceph-osd@${OSD_ID}


However, after a reboot of the storage node, none of the OSDs start.  I noticed that the tmpfs-based filesystems that were mounted on /var/lib/ceph/osd/ceph-X before the reboot no longer exist.  Did I miss a step when converting to bluestore, or when converting to ceph-volume?

The ceph-volume documentation does mention an activation step, but it didn't appear to be needed after the "prepare" above (the OSD mounted and started recovery as soon as "systemctl start ceph-osd@0" was issued).
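The activation step mentioned above is also what registers the OSD with systemd, so skipping it can leave nothing to recreate the tmpfs mount at boot even though a manual start works. A hedged sketch, using example id/fsid values (not run here):

```shell
# Sketch of the ceph-volume activation step after 'prepare'.
# OSD_ID and OSD_FSID are example values; read the real ones from
# 'ceph-volume lvm list'.
OSD_ID=0
OSD_FSID=baa8d599-02ab-4a55-9b2f-a82fef253df8

# Either activate one OSD (not run here):
#   ceph-volume lvm activate "${OSD_ID}" "${OSD_FSID}"
# or everything 'prepare' has set up on this node:
#   ceph-volume lvm activate --all

CMD="ceph-volume lvm activate ${OSD_ID} ${OSD_FSID}"
echo "${CMD}"
```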


Note:  I can see all of the lvm volumes on the storage node:

# pvscan
  PV /dev/sdg   VG ceph-9854968e-659b-432a-b816-8ce3400e90d3 lvm2 [<2.73 TiB / 0    free]
  PV /dev/sdi   VG ceph-08abf9b3-fae0-4113-b889-0bb218e6d613 lvm2 [<2.73 TiB / 0    free]
  PV /dev/sdh   VG ceph-cddd89e5-87b2-4d8a-b905-03bd7c50a429 lvm2 [<2.73 TiB / 0    free]
  PV /dev/sde   VG ceph-baa8d599-02ab-4a55-9b2f-a82fef253df8 lvm2 [<2.73 TiB / 0    free]
  PV /dev/sdf   VG ceph-2ea66a6f-762d-4f46-8ec9-ff9f4a114ef9 lvm2 [<2.73 TiB / 0    free]
  PV /dev/sdk   VG ceph-6bb12ce7-042b-4ffe-bf7a-62296dc36fa2 lvm2 [<2.73 TiB / 0    free]
  PV /dev/sdl   VG ceph-7a6a54c9-664f-499a-83ad-7396d801ab3f lvm2 [<2.73 TiB / 0    free]
  PV /dev/sdj   VG ceph-50effcbd-a926-4a7f-bb00-62a6fa5b6aec lvm2 [<2.73 TiB / 0    free]

But if I issue the following, it just hangs and times out:

systemctl start ceph-volume@lvm-0-baa8d599-02ab-4a55-9b2f-a82fef253df8
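When that unit hangs, it is worth confirming that ceph-volume still has metadata for the OSD before blaming systemd. A sketch (the inspection commands are shown as comments; the fsid is the example from above):

```shell
# Sketch: debugging a hanging ceph-volume@ unit.
# FSID is an example value matching the unit name above.
FSID=baa8d599-02ab-4a55-9b2f-a82fef253df8

# Inspect what ceph-volume has recorded on this node (not run here):
#   ceph-volume lvm list
# and check the unit's journal for the actual error:
#   journalctl -u "ceph-volume@lvm-0-${FSID}"

UNIT="ceph-volume@lvm-0-${FSID}"
echo "${UNIT}"
```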


Any assistance would be appreciated.

Gary


--
Gary Molenkamp            Computer Science/Science Technology Services
Systems Administrator        University of Western Ontario
molenkam@xxxxxx                 http://www.csd.uwo.ca
(519) 661-2111 x86882        (519) 661-3566

_______________________________________________
ceph-users mailing list
ceph-users@xxxxxxxxxxxxxx
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com


