Aleksey,

Were you able to try the vgscan suggestion? I am interested in following
up on this to implement the fix in ceph-volume. Since Vasu created the
tracker ticket, my comments there are probably not reaching you. Anything
you can corroborate will help in attempting a fix. A rough sketch of what
adding vgscan to the activation unit could look like is at the end of
this mail.

Thanks!

On Tue, Apr 10, 2018 at 3:39 PM, Alfredo Deza <adeza@xxxxxxxxxx> wrote:
> On Tue, Apr 10, 2018 at 3:26 PM, Vasu Kulkarni <vakulkar@xxxxxxxxxx> wrote:
>> Yes, I have filed a tracker for the issue: http://tracker.ceph.com/issues/23645
>>
>> On Tue, Apr 10, 2018 at 9:57 AM, Aleksey Gutikov
>> <aleksey.gutikov@xxxxxxxxxx> wrote:
>>> SATA HDDs; this happens on a running server, without a reboot.
>>> Due to a hardware problem, vibration, human factor, anything, the SATA
>>> host loses the connection to the drive and /dev/sda disappears. Then the
>>> operator unplugs/plugs it, and without LVM it can appear with the same
>>> node /dev/sda or with a different one, it does not matter - the OSD will
>>> be started.
>>> But in the LVM case, /dev/dm-0 holds the LVM objects and the sda node,
>>> so the disk gets the next letter (/dev/sdt for example), but LVM can't
>>> create an LV with the same uid, so lsblk does not see a logical volume
>>> on this disk.
>
> This is still not very clear to me. You mention a plug/unplug of disks
> that makes the device path change, but then that "lvm can't create lv
> with same uid". So is this before the OSD is running? Or where exactly
> in the process does this happen?
>
> In any case, you could just refresh LVM's cache by running: vgscan
>
> The docs explain this better:
>
>> LVM runs the vgscan command automatically at system startup and at other
>> times during LVM operation, such as when you execute a vgcreate command
>> or when LVM detects an inconsistency. You may need to run the vgscan
>> command manually when you change your hardware configuration, causing new
>> devices to be visible to the system that were not present at system
>> bootup. This may be necessary, for example, when you add new disks to the
>> system on a SAN or hotplug a new disk that has been labeled as a physical
>> volume.
>
> If you run that, do you still have issues?
>
> If the problem goes away with vgscan, we could just add it to the unit
> that activates/starts the OSD.
>
>>> Yes, everything will be fixed after a reboot, but I don't think that is
>>> a solution.
>>>
>>> On Apr 10, 2018 6:56 PM, "Vasu Kulkarni" <vakulkar@xxxxxxxxxx> wrote:
>>>
>>> On Tue, Apr 10, 2018 at 7:19 AM, Aleksei Gutikov
>>> <aleksey.gutikov@xxxxxxxxxx> wrote:
>>>>
>>>> Hi all,
>>>>
>>>> Previously, with ceph-disk, when an HDD flapped there was a udev rule
>>>> starting "ceph-disk trigger" that checked the XFS partition holding the
>>>> OSD metadata and started the OSD if the metadata existed.
>>>>
>>>> Now, with ceph-volume, the device mapper device (/dev/dm-8 for example)
>>>> holds the whole tree of kernel objects, including the LVM LV, VG and PV
>>>> as well as the block device itself, so the same disk appears with a
>>>> different letter and without LVM data (lsblk does not see an LV on the
>>>> disk with the different letter).
>>>>
>>>> And I have to perform a list of manual actions to start the OSD:
>>>>
>>>> - remove the device mapper device:
>>>>   sudo dmsetup remove /dev/dm-8
>>>>
>>>> - disable the new block device and rescan SCSI to make the LVM volume
>>>>   appear:
>>>>   echo 1 | sudo tee /sys/block/sdb/device/delete
>>>>   echo "- - -" | sudo tee /sys/class/scsi_host/host0/scan
>>>>
>>>> - maybe umount the OSD directory (I'm not sure if it is required):
>>>>   sudo umount /var/lib/ceph/osd/ceph-12
>>>>
>>>> - list the OSD disks to get the LV name (OSD fsid):
>>>>   sudo ceph-volume lvm list
>>>>
>>>> - and finally start the OSD:
>>>>   sudo ceph-volume lvm trigger 12-92b66a98-1c35-40a8-bf5b-ac123c366166
>>>>
>>>> Is that expected behavior, a bug, or am I missing something?
>>>
>>> What you are describing is a bug and you should file a tracker;
>>> ceph-volume should handle the change internally, and there should be no
>>> need for the admin to do the above operations for any device mapper name
>>> changes. Are these external SCSI devices?
>>>
>>>> Thanks
>>>>
>>>> --
>>>>
>>>> Best regards,
>>>> Aleksei Gutikov
>>>> Software Engineer | synesis.ru | Minsk. BY
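
As mentioned at the top, here is a rough, untested sketch of what adding
vgscan to the activation unit could look like. It assumes the OSD is
activated through the stock ceph-volume@.service systemd unit and that
vgscan lives under /usr/sbin on your distribution, so treat it as an
illustration rather than a final patch:

  # /etc/systemd/system/ceph-volume@.service.d/vgscan.conf
  # Hypothetical drop-in: refresh LVM metadata before the OSD is activated,
  # so that volumes on a disk that was re-plugged (and now shows up under a
  # new /dev/sdX letter) are visible again to the activation step.
  [Service]
  ExecStartPre=/usr/sbin/vgscan

A "systemctl daemon-reload" would be needed after adding the drop-in. If
vgscan alone turns out to be enough in your tests, the same call could
instead be made from ceph-volume's own activation code rather than from
the unit file.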