Thank you gentlemen. I will give this a shot and reply with what worked.

On Jul 19, 2019, at 11:11 AM, Tarek Zegar <tzegar@xxxxxxxxxx> wrote:

On the host with the OSD, run:

# ceph-volume lvm list

From: "☣Adam" <adam@xxxxxxxxx>
To: ceph-users@xxxxxxxxxxxxxx
Date: 07/18/2019 03:25 PM
Subject: Re: [ceph-users] Need to replace OSD. How do I find physical disk

The block device can be found in /var/lib/ceph/osd/ceph-$ID/block:

# ls -l /var/lib/ceph/osd/ceph-9/block

In my case it links to /dev/sdbvg/sdb, which makes it pretty obvious which drive this is, but the volume group and logical volume could be named anything. To see which physical disk(s) make up this volume group, use lsblk (as Reed suggested):

# lsblk

If that drive then needs to be located in a computer with many drives, smartctl can be used to pull the make, model, and serial number:

# smartctl -i /dev/sdb

I was not aware of ceph-volume or `ceph-disk list` (which is apparently now deprecated in favor of ceph-volume), so thank you to everyone in this thread for teaching me alternative (arguably more proper) ways of doing this. :-)
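For reference, the listing below is a trimmed sketch of what ceph-volume lvm list typically prints; the exact fields vary by release, and osd.9, /dev/sdbvg/sdb, and /dev/sdb are just the names used in this thread. The "devices" line is the OSD-to-physical-disk mapping being asked about:

====== osd.9 ======

  [block]    /dev/sdbvg/sdb

      block device    /dev/sdbvg/sdb
      osd id          9
      type            block
      devices         /dev/sdb

On Luminous and later you can usually get the same answer from any admin node, without logging in to the OSD host, since each OSD reports its backing device in its metadata:

# ceph osd metadata 9 | grep -e '"devices"' -e '"hostname"'

Assuming the metadata fields present in recent releases, this prints the host (here ceph-osd1) and the backing device (here sdb).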
On 7/18/19 12:58 PM, Pelletier, Robert wrote:
> How do I find the physical disk in a Ceph Luminous cluster in order to
> replace it? osd.9 is down in my cluster; it resides on the ceph-osd1 host.
>
> If I run lsblk -io KNAME,TYPE,SIZE,MODEL,SERIAL I can get the serial
> numbers of all the physical disks, for example:
>
> sdb disk 1.8T ST2000DM001-1CH1 Z1E5VLRG
>
> But how do I find out which OSD is mapped to sdb, and so on?
>
> When I run df -h I get this:
>
> [root@ceph-osd1 ~]# df -h
> Filesystem                   Size  Used Avail Use% Mounted on
> /dev/mapper/ceph--osd1-root   19G  1.9G   17G  10% /
> devtmpfs                      48G     0   48G   0% /dev
> tmpfs                         48G     0   48G   0% /dev/shm
> tmpfs                         48G  9.3M   48G   1% /run
> tmpfs                         48G     0   48G   0% /sys/fs/cgroup
> /dev/sda3                    947M  232M  716M  25% /boot
> tmpfs                         48G   24K   48G   1% /var/lib/ceph/osd/ceph-2
> tmpfs                         48G   24K   48G   1% /var/lib/ceph/osd/ceph-5
> tmpfs                         48G   24K   48G   1% /var/lib/ceph/osd/ceph-0
> tmpfs                         48G   24K   48G   1% /var/lib/ceph/osd/ceph-8
> tmpfs                         48G   24K   48G   1% /var/lib/ceph/osd/ceph-7
> tmpfs                         48G   24K   48G   1% /var/lib/ceph/osd/ceph-33
> tmpfs                         48G   24K   48G   1% /var/lib/ceph/osd/ceph-10
> tmpfs                         48G   24K   48G   1% /var/lib/ceph/osd/ceph-1
> tmpfs                         48G   24K   48G   1% /var/lib/ceph/osd/ceph-38
> tmpfs                         48G   24K   48G   1% /var/lib/ceph/osd/ceph-4
> tmpfs                         48G   24K   48G   1% /var/lib/ceph/osd/ceph-6
> tmpfs                        9.5G     0  9.5G   0% /run/user/0
>
> Robert Pelletier, IT and Security Specialist
> Eastern Maine Community College
> (207) 974-4782 | 354 Hogan Rd., Bangor, ME 04401
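A note on the df output above: those tmpfs mounts are expected with BlueStore. The OSD directory holds only a few metadata files in tmpfs, and the data itself lives on an LVM logical volume, which is why df never shows the physical disk. A minimal sketch of tracing the chain by hand with standard LVM tools, assuming the /dev/sdbvg/sdb naming from Adam's example:

# ls -l /var/lib/ceph/osd/ceph-9/block
lrwxrwxrwx 1 ceph ceph 14 ... block -> /dev/sdbvg/sdb

# lvs -o lv_name,vg_name,devices sdbvg
  LV   VG     Devices
  sdb  sdbvg  /dev/sdb(0)

The Devices column names the physical volume backing the logical volume (pvs shows the same mapping from the other direction); from there, smartctl -i /dev/sdb gives the model and serial number to match against the drive's label.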
_______________________________________________
ceph-users mailing list
ceph-users@xxxxxxxxxxxxxx
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com