Re: Redeploy ceph orch OSDs after reboot, but don't mark as 'unmanaged'


Hi Janek,

I don't really have a solution, but I tend to disagree that 'ceph cephadm osd activate' looks for OSDs to create. The docs specifically state that it activates existing OSDs, and it did work in my test environment. I also commented on the tracker issue you referred to. So as I see it, the question is why it doesn't activate your OSDs, and what it does differently when you deploy them via cephadm. Do you have the cephadm.log and the mgr log from the 'ceph cephadm osd activate' call?
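In case it helps, a rough sketch of where those two logs can usually be pulled from (the paths and the 'cephadm' log channel below are the defaults; adjust the line counts and FSID for your deployment):

```shell
# cephadm writes a host-side log on each managed node; the mgr's cephadm
# module messages can be fetched through the cluster log channel.
CEPHADM_LOG=/var/log/ceph/cephadm.log

# Host-side log around the failed 'osd activate' call:
if [ -r "$CEPHADM_LOG" ]; then
  tail -n 200 "$CEPHADM_LOG"
fi

# Recent cephadm module output from the active mgr:
if command -v ceph >/dev/null 2>&1; then
  ceph log last 100 debug cephadm
fi
```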

Thanks,
Eugen

Quoting Janek Bevendorff <janek.bevendorff@xxxxxxxxxxxxx>:

Actually, ceph cephadm osd activate doesn't do what I expected it to do. It seems to look for new OSDs to create instead of activating existing ones. Hence, it does nothing on my hosts and only prints 'Created no osd(s) on host XXX; already created?' So this wouldn't be an option either, even if I were willing to deploy the admin key on the OSD hosts.


On 07/11/2023 11:41, Janek Bevendorff wrote:
Hi,

We have our cluster RAM-booted, so we start from a clean slate after every reboot. That means I need to redeploy all OSD daemons as well. At the moment, I run cephadm deploy via Salt on the rebooted node, which brings the OSDs back up, but the problem is that they then show up as 'unmanaged' in ceph orch ps.

I could simply skip the cephadm call and wait for the Ceph orchestrator to reconcile and auto-activate the disks, but that can take up to 15 minutes, which is unacceptable. Running ceph cephadm osd activate is not an option either, since I don't have the admin keyring deployed on the OSD hosts (I could do that, but I don't want to).

How can I manually activate the OSDs after a reboot and hand over control to the Ceph orchestrator afterwards? I checked the deployments in /var/lib/ceph/<FSID>, but the only difference I found between my manual cephadm deployment and what ceph orch does is that the device links to /dev/mapper/ceph--... instead of /dev/ceph-...
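One quick way to see whether the two deployment paths really end up on different devices is to resolve the 'block' symlink in each OSD's data directory; a /dev/mapper/ceph--* name and a /dev/ceph-*/osd-block-* name usually point at the same device-mapper node. A minimal sketch, assuming the standard data-dir layout (the helper name and the demo paths below are made up; on a real host you would run it against /var/lib/ceph/<FSID>/osd.<id>):

```shell
# Hypothetical helper: resolve an OSD data dir's 'block' symlink all the
# way to the final device node, so differently-spelled symlinks from a
# manual cephadm deploy and an orchestrator deploy can be compared.
osd_block_target() {
  readlink -f "$1/block"
}

# Demo against a scratch directory standing in for /var/lib/ceph/<FSID>/osd.0:
demo=$(mktemp -d)
touch "$demo/fake-device"               # stand-in for the real LV node
ln -s "$demo/fake-device" "$demo/block" # what the 'block' symlink looks like
osd_block_target "$demo"
```

If both deployments resolve to the same node, the differing symlink spelling alone shouldn't matter to BlueStore.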

Any hints appreciated!

Janek


_______________________________________________
ceph-users mailing list -- ceph-users@xxxxxxx
To unsubscribe send an email to ceph-users-leave@xxxxxxx





