Hello,

I had an OSD drop out a couple of days ago. This is Ceph 14.2.16 (Nautilus), BlueStore, HDD + NVMe, non-container. The HDD had more or less disappeared from the system, so I powered down the node, reseated the drive, and it came back. However, the OSD won't start.

systemctl --failed shows that the lvm2 pvscan unit failed, which prevents the OSD unit from starting. Running the pvscan activate command manually with --verbose gave:

  device-mapper: reload ioctl on (253:7) failed: Read-only file system

I have been looking at this for a while, but I can't figure out what is read-only that is causing the problem.

The full output of the pvscan is:

# pvscan --cache --activate ay --verbose '8:48'
  pvscan devices on command line.
  activation/auto_activation_volume_list configuration setting not defined: All logical volumes will be auto-activated.
  Activating logical volume ceph-block-b1fea172-71a4-463e-a3e3-8cdcc1bc7b79/osd-block-425faf92-449e-4b57-98f2-a90a7f60e2a4.
  activation/volume_list configuration setting not defined: Checking only host tags for ceph-block-b1fea172-71a4-463e-a3e3-8cdcc1bc7b79/osd-block-425faf92-449e-4b57-98f2-a90a7f60e2a4.
  Creating ceph--block--b1fea172--71a4--463e--a3e3--8cdcc1bc7b79-osd--block--425faf92--449e--4b57--98f2--a90a7f60e2a4
  Loading table for ceph--block--b1fea172--71a4--463e--a3e3--8cdcc1bc7b79-osd--block--425faf92--449e--4b57--98f2--a90a7f60e2a4 (253:7).
  device-mapper: reload ioctl on (253:7) failed: Read-only file system
  Removing ceph--block--b1fea172--71a4--463e--a3e3--8cdcc1bc7b79-osd--block--425faf92--449e--4b57--98f2--a90a7f60e2a4 (253:7)
  Activated 0 logical volumes in volume group ceph-block-b1fea172-71a4-463e-a3e3-8cdcc1bc7b79.
  0 logical volume(s) in volume group "ceph-block-b1fea172-71a4-463e-a3e3-8cdcc1bc7b79" now active
  ceph-block-b1fea172-71a4-463e-a3e3-8cdcc1bc7b79: autoactivation failed.

-Dave

--
Dave Hall
Binghamton University
kdhall@xxxxxxxxxxxxxx
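
P.S. In case someone spots something obvious, these are the checks I'm planning next to narrow down what is read-only. A rough sketch, not a known fix: /dev/sdd is only my guess from the '8:48' major:minor pair in the pvscan command, so I'll confirm the device name with lsblk before touching anything.

  # which device is 8:48, and does its RO column show 1?
  lsblk -o NAME,MAJ:MIN,RO,TYPE,SIZE

  # 1 means the kernel holds the whole disk read-only
  blockdev --getro /dev/sdd

  # state of any leftover device-mapper devices from the failed activation
  dmsetup info -c

  # if the disk reports read-only, try clearing the flag, then retry activation
  blockdev --setrw /dev/sdd
  vgchange -ay ceph-block-b1fea172-71a4-463e-a3e3-8cdcc1bc7b79

If vgchange then brings the LV up, I would expect ceph-volume lvm activate --all (or just restarting the OSD's systemd unit) to get the OSD going again.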