Just realised the debug paste I sent was for OSD 5 but the other info is
for OSD 0. They are both having the same issue, but for completeness' sake
here is the debug output from OSD 0: http://paste.debian.net/1211873/

All daemons in the cluster are running Ceph Pacific 16.2.5.

Regards,
Davíð

On Wed, Sep 15, 2021 at 03:30:22PM +0000, Davíð Steinn Geirsson wrote:
> Hi,
>
> I rebooted one of my ceph nodes this morning after OS updates. No ceph
> packages were upgraded. After the reboot, 4 of the 12 OSDs on this host
> refuse to start, giving these errors:
> ```
> Sep 15 14:59:25 janky ceph-osd[12384]: 2021-09-15T14:59:24.994+0000 7f418196ef00 -1 bluestore(/var/lib/ceph/osd/ceph-0) _open_db erroring opening db:
> Sep 15 14:59:25 janky ceph-osd[12384]: 2021-09-15T14:59:25.518+0000 7f418196ef00 -1 osd.0 0 OSD:init: unable to mount object store
> Sep 15 14:59:25 janky ceph-osd[12384]: 2021-09-15T14:59:25.518+0000 7f418196ef00 -1 ** ERROR: osd init failed: (5) Input/output error
> ```
>
> The files and devices look okay:
> ```
> root@janky:/var/lib/ceph/osd/ceph-0# ls -l /var/lib/ceph/osd/ceph-0/
> total 24
> lrwxrwxrwx 1 ceph ceph 93 Sep 15 14:58 block -> /dev/ceph-83bc8ca0-6016-42e5-a944-e42b5b91ffc0/osd-block-81d376be-36e8-46ca-837e-b3a65b445213
> -rw------- 1 ceph ceph 37 Sep 15 14:58 ceph_fsid
> -rw------- 1 ceph ceph 37 Sep 15 14:58 fsid
> -rw------- 1 ceph ceph 55 Sep 15 14:58 keyring
> -rw------- 1 ceph ceph  6 Sep 15 14:58 ready
> -rw------- 1 ceph ceph 10 Sep 15 14:58 type
> -rw------- 1 ceph ceph  2 Sep 15 14:58 whoami
> root@janky:/var/lib/ceph/osd/ceph-0# ls -l /dev/ceph-83bc8ca0-6016-42e5-a944-e42b5b91ffc0/osd-block-81d376be-36e8-46ca-837e-b3a65b445213
> lrwxrwxrwx 1 root root 8 Sep 15 14:59 /dev/ceph-83bc8ca0-6016-42e5-a944-e42b5b91ffc0/osd-block-81d376be-36e8-46ca-837e-b3a65b445213 -> ../dm-10
> root@janky:/var/lib/ceph/osd/ceph-0# ls -l /dev/dm-10
> brw-rw---- 1 ceph ceph 253, 10 Sep 15 14:59 /dev/dm-10
> ```
>
> I can read /dev/dm-10 fine, and there are no I/O errors in dmesg.
>
> I tried running ceph-osd in debug mode; the output can be seen at:
> http://paste.debian.net/1211871/
>
> Any ideas would be appreciated. I have sufficient redundancy to recover
> from this, but I would really like to know what happened here, so I'm
> leaving at least one OSD around in this state for testing.
>
> Regards,
> Davíð
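For anyone poking at an OSD in this state, a rough sketch of the first low-level checks (the device path and OSD directory are taken from the listings above; the `count=` value is just an arbitrary spot check, not a full-device read):

```
# Read a sample of the raw LV to confirm the block layer is healthy;
# dropping count= would read the entire device instead.
dd if=/dev/dm-10 of=/dev/null bs=1M count=1024 status=progress

# Print the label BlueStore keeps at the start of the device (osd_uuid,
# size, whoami, ...); garbage or a mismatched fsid here would point at
# the device contents rather than the daemon.
ceph-bluestore-tool show-label --dev /dev/dm-10
```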
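If the label looks sane, a read-only consistency check of the store itself would be a reasonable next step. `fsck` only checks and reports; it is `repair` (not shown) that would modify anything:

```
# Consistency check of the BlueStore metadata and embedded RocksDB;
# run with the OSD stopped. Adding --deep also reads all object data
# and verifies checksums, which takes much longer.
ceph-bluestore-tool fsck --path /var/lib/ceph/osd/ceph-0
```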
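And a foreground debug run along the lines of the pastes above might look like this (the chosen subsystems and level 20 are simply the usual maximally verbose choice; the log filename is illustrative):

```
# Run osd.0 in the foreground with verbose BlueStore/BlueFS/RocksDB
# logging, capturing everything to a file for later inspection.
ceph-osd -f -i 0 --debug-bluestore 20 --debug-bluefs 20 \
    --debug-rocksdb 20 2>&1 | tee /tmp/ceph-osd.0-debug.log
```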