Re: Redeploy ceph orch OSDs after reboot, but don't mark as 'unmanaged'

I meant this one: https://tracker.ceph.com/issues/55395
Is there an "unmanaged: true" statement in this output?

ceph orch ls osd --export

Just out of curiosity, is there a "service_name" in your unit.meta for that OSD?

grep service_name /var/lib/ceph/{fsid}/osd.{id}/unit.meta
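
For illustration (the "my-drivegroup" name, placement and data_devices below are made up, only the unmanaged flag matters here), a spec exported that way looks roughly like:

service_type: osd
service_id: my-drivegroup
placement:
  host_pattern: '*'
unmanaged: true
spec:
  data_devices:
    all: true

and, for an OSD the orchestrator deployed itself, the grep should print something like:

    "service_name": "osd.my-drivegroup",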


Quoting Janek Bevendorff <janek.bevendorff@xxxxxxxxxxxxx>:

Hi Eugen,

I stopped one OSD (which was deployed by ceph orch before) and this is what the MGR log says:

2023-11-09T13:35:36.941+0000 7f067f1f0700  0 [cephadm DEBUG cephadm.services.osd] osd id 96 daemon already exists

Before and after that, the log contains JSON dumps of the LVM properties of all OSDs. I get the same messages when I delete all files under /var/lib/ceph/<FSID>/osd.96 and the OSD service symlink in /etc/systemd/system/.
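
As a quick cross-check — assuming the cephadm binary and jq are available on the host — the following shows what cephadm itself still lists for that daemon after those files are gone:

cephadm ls | jq '.[] | select(.name == "osd.96")'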

ceph cephadm osd activate --verbose only shows this:

[{'flags': 8,
  'help': 'Start OSD containers for existing OSDs',
  'module': 'mgr',
  'perm': 'rw',
  'sig': [argdesc(<class 'ceph_argparse.CephPrefix'>, req=True, name=prefix, n=1, numseen=0, prefix=cephadm),
          argdesc(<class 'ceph_argparse.CephPrefix'>, req=True, name=prefix, n=1, numseen=0, prefix=osd),
          argdesc(<class 'ceph_argparse.CephPrefix'>, req=True, name=prefix, n=1, numseen=0, prefix=activate),
          argdesc(<class 'ceph_argparse.CephString'>, req=True, name=host, n=N, numseen=0)]}]
Submitting command:  {'prefix': 'cephadm osd activate', 'host': ['XXX'], 'target': ('mon-mgr', '')}
submit {"prefix": "cephadm osd activate", "host": ["XXX"], "target": ["mon-mgr", ""]} to mon-mgr
Created no osd(s) on host XXX; already created?

I suspect that it doesn't work for OSDs that are not explicitly marked as managed by ceph orch. But how do I mark them as managed?
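
For what it's worth, the way the cephadm docs describe toggling a service between unmanaged and managed is to edit the exported spec and re-apply it — a sketch, assuming the OSDs already belong to some drive-group spec:

ceph orch ls osd --export > osd-specs.yaml
# set "unmanaged: false" for the relevant service, or drop the unmanaged line entirely
ceph orch apply -i osd-specs.yaml

Whether that helps for OSDs the orchestrator never considered its own is exactly the open question.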

I also commented on the tracker issue you referred to.

Which issue exactly do you mean?

Janek



Thanks,
Eugen

Quoting Janek Bevendorff <janek.bevendorff@xxxxxxxxxxxxx>:

Actually, ceph cephadm osd activate doesn't do what I expected it to do. It seems to be looking for new OSDs to create instead of activating existing ones. Hence, it does nothing on my hosts and only prints 'Created no osd(s) on host XXX; already created?' So this wouldn't be an option either, even if I were willing to deploy the admin key on the OSD hosts.


On 07/11/2023 11:41, Janek Bevendorff wrote:
Hi,

We have our cluster RAM-booted, so we start from a clean slate after every reboot. That means I need to redeploy all OSD daemons as well. At the moment, I run cephadm deploy via Salt on the rebooted node, which brings the OSDs back up, but the problem is that they then show up as 'unmanaged' in ceph orch ps.

I could simply skip the cephadm call and wait for the Ceph orchestrator to reconcile and auto-activate the disks, but that can take up to 15 minutes, which is unacceptable. Running ceph cephadm osd activate is not an option either, since I don't have the admin keyring deployed on the OSD hosts (I could do that, but I don't want to).

How can I manually activate the OSDs after a reboot and hand control back to the Ceph orchestrator afterwards? I checked the deployments in /var/lib/ceph/<FSID>, but the only difference I found between my manual cephadm deployment and what ceph orch does is that the device link points to /dev/mapper/ceph--... instead of /dev/ceph-...
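
One direction that might be worth sketching out (untested; "osd.my-drivegroup" is a placeholder, <FSID> and the OSD fsid come from cephadm ceph-volume lvm list, and the --meta-json option is an assumption about the cephadm version in use — check cephadm deploy --help): extend the cephadm deploy call that Salt already runs so that a matching service_name ends up in unit.meta, since that appears to be what the orchestrator uses to attribute a daemon to a managed service:

# placeholder sketch -- add to whatever the existing Salt-driven cephadm deploy call already passes
cephadm deploy \
    --fsid <FSID> \
    --name osd.96 \
    --osd-fsid <osd fsid from cephadm ceph-volume lvm list> \
    --meta-json '{"service_name": "osd.my-drivegroup"}'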

Any hints appreciated!

Janek



--
Bauhaus-Universität Weimar
Bauhausstr. 9a, R308
99423 Weimar, Germany

Phone: +49 3643 58 3577
www.webis.de


_______________________________________________
ceph-users mailing list -- ceph-users@xxxxxxx
To unsubscribe send an email to ceph-users-leave@xxxxxxx



