Hello,
I have a server with 18 disks and 17 OSD daemons configured. One of the OSD daemons failed to deploy with ceph-deploy. The reason for the failure is unimportant at this point; I believe it was a race condition, as I was running ceph-deploy inside a while loop over all disks in this server.
Now I have two leftover dm-crypted LVM volumes that I am not sure how to clean up. The command that failed and did not quite clean up after itself was:
ceph-deploy osd create --bluestore --dmcrypt --data /dev/sdd --block-db osvg/sdd-db ${SERVERNAME}
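For reference, the deploy loop looked roughly like this (disk-list.txt and the exact device list are placeholders, not the real script; the block.db LV naming follows the command above):

while read dev; do
    # one ceph-deploy call per data disk; block.db LVs are named osvg/<dev>-db
    ceph-deploy osd create --bluestore --dmcrypt \
        --data /dev/${dev} --block-db osvg/${dev}-db ${SERVERNAME}
done < disk-list.txt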
# lsblk
.......
sdd                                                               8:48   0   7.3T  0 disk
└─ceph--f4efa78f--a467--4214--b550--81653da1c9bd-osd--block--097d59be--bbe6--493a--b785--48b259d2ff35
                                                                253:32   0   7.3T  0 lvm
  └─AeV0iG-odWF-NRPE-1bVK-0mxH-OgHL-fneTzr                      253:33   0   7.3T  0 crypt
sds                                                              65:32   0 223.5G  0 disk
├─sds1                                                           65:33   0   512M  0 part  /boot
└─sds2                                                           65:34   0   223G  0 part
  .......
  ├─osvg-sdd--db                                                253:8    0     8G  0 lvm
  │ └─2ukzAx-g9pZ-IyxU-Sp9h-fHv2-INNY-1vTpvz                    253:34   0     8G  0 crypt
# ceph-volume inventory /dev/sdd
====== Device report /dev/sdd ======
 available                False
 rejected reasons         locked
 path                     /dev/sdd
 scheduler mode           deadline
 rotational               1
 vendor                   SEAGATE
 human readable size      7.28 TB
 sas address              0x5000c500a6b1d581
 removable                0
 model                    ST8000NM0185
 ro                       0
 --- Logical Volume ---
 cluster name             ceph
 name                     osd-block-097d59be-bbe6-493a-b785-48b259d2ff35
 osd id                   39
 cluster fsid             8e7a3953-7647-4133-9b9a-7f4a2e2b7da7
 type                     block
 block uuid               AeV0iG-odWF-NRPE-1bVK-0mxH-OgHL-fneTzr
 osd fsid                 097d59be-bbe6-493a-b785-48b259d2ff35
I tried running

ceph-volume lvm zap --destroy /dev/sdd

but it errored out. The OSD id recorded on this leftover volume is the same as on the next drive, /dev/sde, and that osd.39 daemon is running, so the command was effectively trying to zap a running OSD.
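What I was considering next, though I am not sure it is safe, is zapping the two leftover logical volumes by VG/LV name instead of the whole device (names taken from the lsblk and inventory output above), e.g.:

# zap only the leftover block and block.db LVs; I would want to be sure this
# cannot touch the real osd.39 that is running on /dev/sde
ceph-volume lvm zap --destroy ceph-f4efa78f-a467-4214-b550-81653da1c9bd/osd-block-097d59be-bbe6-493a-b785-48b259d2ff35
ceph-volume lvm zap --destroy osvg/sdd-db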
What is the proper way to clean up both the data and block.db volumes, so I can rerun ceph-deploy and add them to the pool?
Thank you!