This isn't a fix for the OSDs not starting at boot time, but it does spare you from rebooting the node again: `ceph-disk activate-all` should go through and start up the rest of your OSDs without another reboot.
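If it helps, it's just a one-shot command run as root on the affected node; the osd id in the follow-up check below is only an example:

    # activate any prepared-but-not-yet-started OSD partitions
    ceph-disk activate-all

    # then confirm the missing OSDs are mounted and running
    df -h | grep /var/lib/ceph/osd
    systemctl status ceph-osd@12.service    # substitute your osd id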
On Wed, Aug 23, 2017 at 9:36 AM Sean Purdy <s.purdy@xxxxxxxxxxxxxxxx> wrote:
Hi,
Luminous 12.1.1
I've had a couple of servers where, at cold boot, one or two of the OSDs haven't been mounted/detected, or have only been partially detected. These are Luminous Bluestore OSDs. A warm boot often fixes it, but I'd rather not have to reboot the node again.
Sometimes /var/lib/ceph/osd/ceph-NN is empty - i.e. not mounted. And sometimes /var/lib/ceph/osd/ceph-NN is mounted, but the /var/lib/ceph/osd/ceph-NN/block symlink is pointing to a /dev/mapper UUID path that doesn't exist. Those partitions have to be mounted before "systemctl start ceph-osd@NN.service" will work.
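From poking around, it looks like getting a single OSD back by hand would be something like the below, though I'm not sure it's the supported way (the osd id and device are just examples):

    # see which data partitions ceph-disk knows about and their state
    ceph-disk list

    # re-run the activation that should have happened at boot
    ceph-disk activate /dev/sdb1            # example data partition

    # or, once /var/lib/ceph/osd/ceph-12 is mounted again:
    systemctl start ceph-osd@12.service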
What happens at disk detect and mount time? Is there a timeout somewhere I can extend?
How can I tell udev to have another go at mounting the disks?
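My untested guess is something along these lines, but I don't know whether it would pick up the /dev/mapper devices too:

    # re-trigger udev add events for block devices and wait for them to settle
    udevadm trigger --subsystem-match=block --action=add
    udevadm settle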
If it's in the docs and I've missed it, apologies.
Thanks in advance,
Sean Purdy
_______________________________________________
ceph-users mailing list
ceph-users@xxxxxxxxxxxxxx
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com