ceph-volume unfortunately doesn't handle completely hanging IOs as well as ceph-disk did. It needs to read actual data from each disk, and it will hang completely if any of the disks doesn't respond.

The low-level command to get the information from LVM is:

  lvs -o lv_tags

This allows you to map an LV to an OSD id.

Paul

On Mon, 8 Oct 2018 at 12:09, Kevin Olbrich <ko@xxxxxxx> wrote:
>
> Hi!
>
> Yes, thank you. At least on one node this works; the other node just
> freezes, but this might be caused by a bad disk that I am trying to find.
>
> Kevin
>
> On Mon, 8 Oct 2018 at 12:07, Wido den Hollander <wido@xxxxxxxx> wrote:
>>
>> Hi,
>>
>> $ ceph-volume lvm list
>>
>> Does that work for you?
>>
>> Wido
>>
>> On 10/08/2018 12:01 PM, Kevin Olbrich wrote:
>> > Hi!
>> >
>> > Is there an easy way to find raw disks (e.g. sdd/sdd1) by OSD id?
>> > Before I migrated from filestore with simple-mode to bluestore with LVM,
>> > I was able to find the raw disk with "df".
>> > Now, I need to go from LVM LV to PV to disk every time I need to
>> > check/smartctl a disk.
>> >
>> > Kevin
>> >
>> > _______________________________________________
>> > ceph-users mailing list
>> > ceph-users@xxxxxxxxxxxxxx
>> > http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com

--
Paul Emmerich

Looking for help with your Ceph cluster? Contact us at https://croit.io

croit GmbH
Freseniusstr. 31h
81247 München
www.croit.io
Tel: +49 89 1896585 90
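Paul's `lvs -o lv_tags` suggestion can be turned into a small helper that resolves an OSD id straight to its backing device, bypassing ceph-volume entirely. This is a sketch: `ceph.osd_id` is one of the LVM tags ceph-volume sets on its LVs, but the exact field layout of the `lvs` output below is an assumption; check it against your own nodes before relying on it.

```shell
#!/bin/sh
# osd_device: read `lvs --noheadings -o lv_tags,devices` output on stdin
# and print the raw device backing the given OSD id.
#
# Assumed input lines look like (one LV per line, two fields):
#   ceph.osd_fsid=abc,ceph.osd_id=3,ceph.type=block /dev/sdd(0)
osd_device() {
    # Match ceph.osd_id=<id> as a whole tag (so id 3 does not match 30),
    # then strip the "(0)" extent suffix from the devices field.
    awk -v id="$1" '$1 ~ "(^|,)ceph\\.osd_id=" id "(,|$)" {
        dev = $2
        sub(/\(.*/, "", dev)
        print dev
    }'
}

# Usage on a real node (the lvs call needs root):
#   lvs --noheadings -o lv_tags,devices | osd_device 3
# ...which should print something like /dev/sdd, ready for smartctl.
```

Because this only shells out to `lvs` once, it avoids the per-disk reads that make `ceph-volume lvm list` hang on a dead drive, though a disk that stalls LVM metadata scans can still block `lvs` itself.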