I would try it with a spec file that contains the path to the partition
(and limit the placement to that host only). Or have you tried that already?
I don't use partitions for Ceph myself, but there have been threads from
other users who do, and with spec files it seemed to work.
You can generate a preview with 'ceph orch apply -i osd-spec.yaml --dry-run'.
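Something along these lines could be a starting point. This is an untested
sketch: I'm reusing the host and device names from your commands below, the
service_id is just an example, and whether ceph-volume accepts the partitions
via 'paths' is exactly what the dry-run should tell you:

service_type: osd
service_id: osd_nodeXXX_sdb1        # arbitrary name, pick your own
placement:
  hosts:
    - osd-nodeXXX                   # restrict the spec to this host only
spec:
  data_devices:
    paths:
      - /dev/sdb1                   # data partition
  db_devices:
    paths:
      - /dev/sda3                   # DB partition

Save it as osd-spec.yaml and check the dry-run output before applying it
for real.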
Quoting Herbert Faleiros <faleiros@xxxxxxxxx>:
I am on a journey, so far successful, to update our clusters to supported
versions. I started with Luminous and Ubuntu 16.04, and now we are on Reef
with Ubuntu 20.04. We still have more updates to do, but at the moment, I
encountered an issue with an OSD, and it was necessary to replace a disk.
Since the cluster was adopted, I'm not entirely sure of the best way to
replace this OSD, because cephadm doesn't like it when the device path is a
partition. I could recreate the OSD using traditional methods and then adopt
it, but that doesn't seem like the best approach. Does
anyone know how I should proceed to recreate this OSD? I had the same
problem in my lab, where I am already on Quincy.
What I am trying to do is:
# ceph orch osd rm 6 --replace --zap
# ceph orch daemon add osd
osd-nodeXXX:data_devices=/dev/sdb1,db_devices=/dev/sda3
The error it gives is:
/usr/bin/docker: stderr ceph-volume lvm batch: error: /dev/sdb1 is a
partition, please pass LVs or raw block devices
_______________________________________________
ceph-users mailing list -- ceph-users@xxxxxxx
To unsubscribe send an email to ceph-users-leave@xxxxxxx