Hi,
the preferred method to deploy OSDs in cephadm-managed clusters is
spec files, see this part of the docs [0] for more information. I
would just not use the '--all-available-devices' flag, except in test
clusters or if you're really sure that this is what you want.
If you use 'ceph orch daemon add osd ...', you'll end up with one (or
more) OSD(s), but they will be unmanaged, as you already noted in your
own cluster. There are a couple of examples with advanced specs (e.g.
DB/WAL on dedicated devices) in the docs as well [1]. So my
recommendation would be to use a spec file that suits your disk
layout. You can always check with the '--dry-run' flag before actually
applying it:
ceph orch apply -i osd-spec.yaml --dry-run
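As a rough sketch of what such a spec could look like (the
service_id, host_pattern and the rotational filters below are just
placeholders, assuming spinning data disks and NVMe devices for
DB/WAL; adapt them to your own layout):

service_type: osd
service_id: osd-hdd-nvme-db
placement:
  host_pattern: '*'
spec:
  data_devices:
    rotational: 1
  db_devices:
    rotational: 0

The dry run then shows which OSDs cephadm would create on which hosts
before anything is actually changed.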
Regards,
Eugen
[0] https://docs.ceph.com/en/latest/cephadm/services/osd/#deploy-osds
[1]
https://docs.ceph.com/en/latest/cephadm/services/osd/#advanced-osd-service-specifications
Quoting Tim Holloway <timh@xxxxxxxxxxxxx>:
As I understand it, the manual OSD setup is only for legacy
(non-container) OSDs. Directory locations are wrong for managed
(containerized) OSDs, for one.
Actually, the manual setup docs as a whole ought to be moved out of
the mainline documentation. In their present arrangement, they make
legacy setup sound like the preferred method. And have you noticed
that there is no corresponding well-marked section titled "Automated
(cephadm) setup"?
This is how we end up with OSDs that are simultaneously legacy AND
cephadm-administered, since at last count there are no interlocks
within Ceph to prevent such a mess.
Tim
On 10/31/24 13:39, Dave Hall wrote:
Hello.
Sorry if it appears that I am reposting the same issue under a different
topic. However, I feel that the problem has moved and I now have different
questions.
At this point I have, I believe, removed all traces of OSD.12 from my
cluster - based on steps in the Reef docs at
https://docs.ceph.com/en/reef/rados/operations/add-or-rm-osds/#. I have
further located and removed the WAL/DB LV on an associated NVMe drive
(shared with 3 other OSDs).
I don't believe the instructions for replacing an OSD (ceph-volume lvm
prepare) still apply, so I have been trying to work with the instructions
under ADDING AN OSD (MANUAL).
However, since my installation is containerized (Podman), it is unclear
which steps should be issued on the host and which within 'cephadm shell'.
There is another ambiguity: in step 3 the instruction is to 'mkfs -t
{fstype}' and then to 'mount -o user_xattr'. However, which fstype?
After this, in step 4, 'ceph-osd -i {osd-id} --mkfs --mkkey' throws
errors about the keyring file.
So, are these the right instructions to be using in a containerized
installation? Are there, in general, alternate documents for containerized
installations?
Lastly, the above-cited instructions don't say anything about the
separate WAL/DB LV.
Please advise.
Thanks.
-Dave
--
Dave Hall
Binghamton University
kdhall@xxxxxxxxxxxxxx
_______________________________________________
ceph-users mailing list -- ceph-users@xxxxxxx
To unsubscribe send an email to ceph-users-leave@xxxxxxx