Recreate Destroyed OSD

Hello.

Sorry if this looks like a repost of the same issue under a different
topic, but the problem has moved on and I now have different questions.

At this point I believe I have removed all traces of OSD.12 from my
cluster, based on the steps in the Reef docs at
https://docs.ceph.com/en/reef/rados/operations/add-or-rm-osds/#.  I have
also located and removed its WAL/DB LV on the associated NVMe drive
(shared with 3 other OSDs).
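For reference, the removal sequence I followed was roughly the
following (the ceph commands were run from 'cephadm shell'; the VG/LV
names below are placeholders for my actual ones):

    # inside 'cephadm shell': take the OSD out and purge it
    ceph osd out 12
    ceph osd purge 12 --yes-i-really-mean-it

    # on the host: remove the WAL/DB logical volume from the shared
    # NVMe VG (placeholder names)
    lvremove ceph-nvme-vg/osd-12-db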

I don't believe the instructions for replacing an OSD (ceph-volume lvm
prepare) still apply, since the old OSD is now fully removed, so I have
been trying to work from the instructions under ADDING AN OSD (MANUAL).
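For context, the replacement procedure I decided against boils down to
something like this (the device path is a placeholder), which as I
understand it assumes the OSD id was preserved with 'ceph osd destroy'
rather than purged:

    # re-use the old OSD id on a new device (placeholder path)
    ceph-volume lvm prepare --osd-id 12 --data /dev/sdX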

However, since my installation is containerized (Podman), it is unclear
which steps should be issued on the host and which within 'cephadm shell'.
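My working assumption, which may well be wrong, has been that
cluster-level commands belong inside the shell and device-level work
belongs on the host:

    # on the host: open a container with the cluster config and keyring
    cephadm shell

    # inside the shell: cluster-level commands, e.g.
    ceph osd tree

    # back on the host: device-level work (mkfs, mount, lvcreate, ...)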

There is another ambiguity: step 3 says to 'mkfs -t {fstype}' and then
to 'mount -o user_xattr'.  But which fs type?
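To be concrete, my best guess at step 3 is the following, where xfs is
purely my assumption and the device path is a placeholder:

    mkfs -t xfs /dev/sdX                                    # which fstype?
    mount -o user_xattr /dev/sdX /var/lib/ceph/osd/ceph-12  # per the docs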

After this, in step 4, 'ceph-osd -i {osd-id} --mkfs --mkkey' throws
errors about the keyring file.
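Concretely, with my OSD id filled in:

    ceph-osd -i 12 --mkfs --mkkey
    # errors out complaining about the keyring file; I assume it expects
    # one at /var/lib/ceph/osd/ceph-12/keyring (I can paste the exact
    # output if that helps)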

So, are these the right instructions to use in a containerized
installation?  Is there, in general, separate documentation for
containerized installations?

Lastly, the instructions cited above say nothing about recreating the
separate WAL/DB LV.
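If it helps frame the question, with ceph-volume I would have expected
to point the new OSD at a fresh DB LV along these lines (VG/LV names,
size, and device path are placeholders):

    # recreate a DB LV on the shared NVMe VG
    lvcreate -n osd-12-db -L 64G ceph-nvme-vg

    # hand it to ceph-volume along with the data device
    ceph-volume lvm prepare --data /dev/sdX \
        --block.db ceph-nvme-vg/osd-12-db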

Please advise.

Thanks.

-Dave

--
Dave Hall
Binghamton University
kdhall@xxxxxxxxxxxxxx


