Hi,

With 15.2.8, I ran "ceph orch osd rm 12 --replace --force". The PGs on osd.12 were remapped, osd.12 was removed from "ceph osd tree", the daemon was removed from "ceph orch ps", and the device showed as "available" in "ceph orch device ls". Everything seemed good at this point.

Then I did a dry run of the service spec:

```
# cat osd-spec.yaml
service_type: osd
service_id: osd-spec
placement:
  hosts:
    - ceph-osd-1
data_devices:
  rotational: 1
db_devices:
  rotational: 0

# ceph orch apply osd -i osd-spec.yaml --dry-run
+---------+----------+------------+----------+----------+-----+
|SERVICE  |NAME      |HOST        |DATA      |DB        |WAL  |
+---------+----------+------------+----------+----------+-----+
|osd      |osd-spec  |ceph-osd-3  |/dev/sdd  |/dev/sdb  |-    |
+---------+----------+------------+----------+----------+-----+
```

This looks as expected. Then I ran "ceph orch apply osd -i osd-spec.yaml". Here is the cephadm log:

```
/bin/docker:stderr --> relative data size: 1.0
/bin/docker:stderr --> passed block_db devices: 1 physical, 0 LVM
/bin/docker:stderr Running command: /usr/bin/ceph-authtool --gen-print-key
/bin/docker:stderr Running command: /usr/bin/ceph --cluster ceph --name client.bootstrap-osd --keyring /var/lib/ceph/bootstrap-osd/ceph.keyring osd tree -f json
/bin/docker:stderr Running command: /usr/bin/ceph --cluster ceph --name client.bootstrap-osd --keyring /var/lib/ceph/bootstrap-osd/ceph.keyring -i - osd new b05c3c90-b7d5-4f13-8a58-f72761c1971b 12
/bin/docker:stderr Running command: /usr/sbin/vgcreate --force --yes ceph-a3886f74-3de9-4e6e-a983-8330eda0bd64 /dev/sdd
/bin/docker:stderr stdout: Physical volume "/dev/sdd" successfully created.
/bin/docker:stderr stdout: Volume group "ceph-a3886f74-3de9-4e6e-a983-8330eda0bd64" successfully created
/bin/docker:stderr Running command: /usr/sbin/lvcreate --yes -l 572318 -n osd-block-b05c3c90-b7d5-4f13-8a58-f72761c1971b ceph-a3886f74-3de9-4e6e-a983-8330eda0bd64
/bin/docker:stderr stderr: Volume group "ceph-a3886f74-3de9-4e6e-a983-8330eda0bd64" has insufficient free space (572317 extents): 572318 required.
/bin/docker:stderr --> Was unable to complete a new OSD, will rollback changes
```

Q1: why is the VG name here (ceph-<id>) different from the others (ceph-block-<id>)?

Q2: where does that 572318 come from? All the HDDs are the same model, and the VG "Total PE" for every one of them is 572317.

Has anyone seen similar issues? Anything I am missing?

Thanks!
Tony
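
P.S. In case anyone wants to reproduce the comparison: the "Total PE" above came from "vgdisplay", and the same figures can be pulled per VG/PV with the standard LVM reporting fields (output omitted here):

```
# per-VG extent size and total/free extent counts
vgs --units b -o vg_name,vg_extent_size,vg_extent_count,vg_free_count
# per-PV raw size and physical extent count
pvs --units b -o pv_name,pv_size,pv_pe_count
# raw size of the data device in bytes
blockdev --getsize64 /dev/sdd
```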
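
P.P.S. My unverified guess on Q2: the requested extent count looks like the raw disk size divided by the default 4 MiB LVM extent size, while LVM reserves roughly 1 MiB per PV for metadata before computing "Total PE", leaving the VG exactly one extent short. A back-of-the-envelope sketch; the device size below is a made-up example for a ~2.4 TB drive, not a value from my logs:

```
SIZE=2400476553216                   # hypothetical raw HDD size in bytes
EXTENT=$((4 * 1024 * 1024))          # default LVM PE size: 4 MiB
echo $((SIZE / EXTENT))              # 572318 -- what lvcreate was asked for
echo $(((SIZE - 1048576) / EXTENT))  # 572317 -- Total PE after ~1 MiB of PV metadata
```

If that is what is happening, it would be a rounding bug in ceph-volume rather than anything in my spec.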