That's not how it's supposed to work. I tried the same on an Octopus
cluster and removed all filters except:
data_devices:
  rotational: 1
db_devices:
  rotational: 0
My Octopus test OSD nodes have two HDDs and one SSD; I removed all
OSDs and redeployed on one node. This spec file results in three
standalone OSDs! Without the other filters this doesn't seem to work
as expected. I'll run the same test again on Pacific and see where
that goes.
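(For reference, filters like those sit inside the usual cephadm OSD
service spec wrapper, roughly as sketched below; the service_id and
placement values here are placeholders, not the exact file from this
test:

service_type: osd
service_id: osd_hdd_with_ssd_db   # placeholder name
placement:
  host_pattern: '*'               # assumed placement, adjust to taste
data_devices:
  rotational: 1
db_devices:
  rotational: 0

On paper this should put the OSD data on the rotational devices and
block.db on the non-rotational one; as noted above, Octopus instead
created three standalone OSDs.)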
Quoting Kai Stian Olstad <ceph+list@xxxxxxxxxx>:
On 26.05.2021 22:14, David Orman wrote:
We've found that after doing the osd rm, you can run "ceph-volume lvm
zap --osd-id 178 --destroy" on the server with that OSD, as per
https://docs.ceph.com/en/latest/ceph-volume/lvm/zap/#removing-devices
and it will clean things up so they work as expected.
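(For anyone following along, a rough sketch of that sequence, assuming
OSD 178 from the example above and the cephadm wrapper used elsewhere
in this thread; whether cephadm forwards --osd-id exactly like this is
an assumption worth double-checking:

# drain/remove the OSD from the cluster first (the "osd rm" step)
ceph orch osd rm 178
# then, on the host that carried OSD 178, zap its leftover LVs
# (passing --osd-id through the cephadm wrapper is assumed here)
cephadm ceph-volume lvm zap --osd-id 178 --destroy
)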
With the help of Eugen I ran "cephadm ceph-volume lvm zap --destroy
<LV>" and the LV is gone.
I think that gives the same result as "ceph-volume lvm zap --osd-id
178 --destroy" would give me?
I now have 357 GB of free space on the VG, but cephadm doesn't find
and use this space.
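(For reference, the remaining space and the orchestrator's view of the
devices can be checked with something like the commands below; the VG
name is taken from the zap output further down, and whether cephadm
will reuse free space on an existing VG seems to depend on the
version:

# show the free space left in the DB volume group
sudo vgs ceph-block-dbs-563432b7-f52d-4cfe-b952-11542594843b
# refresh the orchestrator's device inventory
sudo ceph orch device ls --refresh
)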
Below is the output of the zap command; it shows the LV was deleted.
$ sudo cephadm ceph-volume lvm zap --destroy /dev/ceph-block-dbs-563432b7-f52d-4cfe-b952-11542594843b/osd-block-db-449bd001-eb32-46de-ab80-a1cbcd293d69
INFO:cephadm:Inferring fsid 3614abcc-201c-11eb-995a-2794bcc75ae0
INFO:cephadm:Using recent ceph image ceph:v15.2.9
INFO:cephadm:/usr/bin/podman:stderr --> Zapping: /dev/ceph-block-dbs-563432b7-f52d-4cfe-b952-11542594843b/osd-block-db-449bd001-eb32-46de-ab80-a1cbcd293d69
INFO:cephadm:/usr/bin/podman:stderr Running command: /usr/bin/dd if=/dev/zero of=/dev/ceph-block-dbs-563432b7-f52d-4cfe-b952-11542594843b/osd-block-db-449bd001-eb32-46de-ab80-a1cbcd293d69 bs=1M count=10 conv=fsync
INFO:cephadm:/usr/bin/podman:stderr stderr: 10+0 records in
INFO:cephadm:/usr/bin/podman:stderr 10+0 records out
INFO:cephadm:/usr/bin/podman:stderr stderr: 10485760 bytes (10 MB, 10 MiB) copied, 0.0195532 s, 536 MB/s
INFO:cephadm:/usr/bin/podman:stderr --> More than 1 LV left in VG, will proceed to destroy LV only
INFO:cephadm:/usr/bin/podman:stderr --> Removing LV because --destroy was given: /dev/ceph-block-dbs-563432b7-f52d-4cfe-b952-11542594843b/osd-block-db-449bd001-eb32-46de-ab80-a1cbcd293d69
INFO:cephadm:/usr/bin/podman:stderr Running command: /usr/sbin/lvremove -v -f /dev/ceph-block-dbs-563432b7-f52d-4cfe-b952-11542594843b/osd-block-db-449bd001-eb32-46de-ab80-a1cbcd293d69
INFO:cephadm:/usr/bin/podman:stderr stdout: Logical volume "osd-block-db-449bd001-eb32-46de-ab80-a1cbcd293d69" successfully removed
INFO:cephadm:/usr/bin/podman:stderr stderr: Removing ceph--block--dbs--563432b7--f52d--4cfe--b952--11542594843b-osd--block--db--449bd001--eb32--46de--ab80--a1cbcd293d69 (253:3)
INFO:cephadm:/usr/bin/podman:stderr stderr: Archiving volume group "ceph-block-dbs-563432b7-f52d-4cfe-b952-11542594843b" metadata (seqno 61).
INFO:cephadm:/usr/bin/podman:stderr stderr: Releasing logical volume "osd-block-db-449bd001-eb32-46de-ab80-a1cbcd293d69"
INFO:cephadm:/usr/bin/podman:stderr stderr: Creating volume group backup "/etc/lvm/backup/ceph-block-dbs-563432b7-f52d-4cfe-b952-11542594843b" (seqno 62).
INFO:cephadm:/usr/bin/podman:stderr --> Zapping successful for: <LV: /dev/ceph-block-dbs-563432b7-f52d-4cfe-b952-11542594843b/osd-block-db-449bd001-eb32-46de-ab80-a1cbcd293d69>
--
Kai Stian Olstad
_______________________________________________
ceph-users mailing list -- ceph-users@xxxxxxx
To unsubscribe send an email to ceph-users-leave@xxxxxxx