Hi,
Indeed! It says "osd" for all the unmanaged OSDs. When I change it
to the name of my managed service and restart the daemon, it shows
up in ceph orch ps --service-name. I checked whether cephadm deploy
perhaps has an undocumented flag for setting the service name, but
couldn't find any. I could run deploy, change the service name and
then restart the service, but that's quite ugly. Any better ideas?
I tried to reproduce it and passed the --config-json
option to cephadm deploy, but it wasn't applied:
--config-json='{"service_name": "osd.myservice"}'
In fact, the unit.meta file was almost empty:
quincy-1:~ # cat
/var/lib/ceph/1e6e5cb6-73e8-11ee-b195-fa163ee43e22/osd.3/unit.meta
{
"memory_request": null,
"memory_limit": null,
"ports": []
}
But I'm not sure whether this is due to the way I tried to
reproduce it or something else. Unfortunately, I don't have a better idea
than changing the service name. Maybe one of the devs has a comment.
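For the record, that manual change would look roughly like the sketch below. The fsid and OSD id are taken from my example above, "osd.myservice" stands in for the actual spec name, and I haven't verified that nothing else besides unit.meta needs to be touched:

fsid=1e6e5cb6-73e8-11ee-b195-fa163ee43e22
osd=3
meta=/var/lib/ceph/$fsid/osd.$osd/unit.meta
# rewrite service_name so the daemon is attributed to the managed spec
jq '.service_name = "osd.myservice"' "$meta" > "$meta.tmp" && mv "$meta.tmp" "$meta"
# restart the daemon so it shows up under the managed service
systemctl restart ceph-$fsid@osd.$osd.service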
Thanks,
Eugen
Quoting Janek Bevendorff <janek.bevendorff@xxxxxxxxxxxxx>:
I meant this one: https://tracker.ceph.com/issues/55395
Ah, alright, almost forgot about that one.
Is there an "unmanaged: true" statement in this output?
ceph orch ls osd --export
No, it only contains the managed services that I configured.
Just out of curiosity, is there a "service_name" in your unit.meta
for that OSD?
grep service_name /var/lib/ceph/{fsid}/osd.{id}/unit.meta
Indeed! It says "osd" for all the unmanaged OSDs. When I change it
to the name of my managed service and restart the daemon, it shows
up in ceph orch ps --service-name. I checked whether cephadm deploy
perhaps has an undocumented flag for setting the service name, but
couldn't find any. I could run deploy, change the service name and
then restart the service, but that's quite ugly. Any better ideas?
Janek
Quoting Janek Bevendorff <janek.bevendorff@xxxxxxxxxxxxx>:
Hi Eugen,
I stopped one OSD (which had previously been deployed by ceph
orch), and this is what the MGR log says:
2023-11-09T13:35:36.941+0000 7f067f1f0700 0 [cephadm DEBUG
cephadm.services.osd] osd id 96 daemon already exists
Before and after that are JSON dumps of the LVM properties of all
OSDs. I get the same messages when I delete all files under
/var/lib/ceph/<FSID>/osd.96 and the OSD service symlink in
/etc/systemd/system/.
ceph cephadm osd activate --verbose only shows this:
[{'flags': 8,
'help': 'Start OSD containers for existing OSDs',
'module': 'mgr',
'perm': 'rw',
'sig': [argdesc(<class 'ceph_argparse.CephPrefix'>, req=True,
name=prefix, n=1, numseen=0, prefix=cephadm),
argdesc(<class 'ceph_argparse.CephPrefix'>, req=True,
name=prefix, n=1, numseen=0, prefix=osd),
argdesc(<class 'ceph_argparse.CephPrefix'>, req=True,
name=prefix, n=1, numseen=0, prefix=activate),
argdesc(<class 'ceph_argparse.CephString'>, req=True,
name=host, n=N, numseen=0)]}]
Submitting command: {'prefix': 'cephadm osd activate', 'host':
['XXX'], 'target': ('mon-mgr', '')}
submit {"prefix": "cephadm osd activate", "host": ["XXX"],
"target": ["mon-mgr", ""]} to mon-mgr
Created no osd(s) on host XXX; already created?
I suspect that it doesn't work for OSDs that are not explicitly
marked as managed by ceph orch. But how do I do that?
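Just to illustrate what I mean by "managed": the OSDs should be covered by a service spec that doesn't have unmanaged set to true, roughly like the sketch below. The service_id, host pattern and device filter here are only placeholders, not my actual spec:

cat > osd-spec.yml <<EOF
service_type: osd
service_id: myservice
placement:
  host_pattern: '*'
spec:
  data_devices:
    all: true
EOF
ceph orch apply -i osd-spec.yml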
I also commented on the tracker issue you referred to.
Which issue exactly do you mean?
Janek
Thanks,
Eugen
Quoting Janek Bevendorff <janek.bevendorff@xxxxxxxxxxxxx>:
Actually, ceph cephadm osd activate doesn't do what I expected
it to do. It seems to be looking for new OSDs to create instead
of looking for existing OSDs to activate. Hence, it does nothing
on my hosts and only prints 'Created no osd(s) on host XXX;
already created?' So this wouldn't be an option either, even if
I were willing to deploy the admin key on the OSD hosts.
On 07/11/2023 11:41, Janek Bevendorff wrote:
Hi,
We have our cluster RAM-booted, so we start from a clean slate
after every reboot. That means I need to redeploy all OSD
daemons as well. At the moment, I run cephadm deploy via Salt
on the rebooted node, which brings the OSDs back up, but the
problem with this is that the redeployed OSDs then show up as
'unmanaged' in ceph orch ps.
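For illustration, bringing a single OSD back up is essentially a call along these lines; the cluster fsid and the per-OSD fsid are placeholders (the latter can be looked up with cephadm ceph-volume -- lvm list), and the --config-json file would carry the daemon's config and keyring:

cephadm ceph-volume -- lvm list --format json    # look up the per-OSD fsid
cephadm deploy --fsid <cluster-fsid> --name osd.96 \
    --osd-fsid <osd-fsid> --config-json config-osd.96.json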
I could simply skip the cephadm call and wait for the Ceph
orchestrator to reconcile and auto-activate the disks, but that
can take up to 15 minutes, which is unacceptable. Running ceph
cephadm osd activate is not an option either, since I don't
have the admin keyring deployed on the OSD hosts (I could do
that, but I don't want to).
How can I manually activate the OSDs after a reboot and hand
over control to the Ceph orchestrator afterwards? I checked the
deployments in /var/lib/ceph/<FSID>, but the only difference I
found between my manual cephadm deployment and what ceph orch
does is that the device symlink points to /dev/mapper/ceph--...
instead of /dev/ceph-...
Any hints appreciated!
Janek
_______________________________________________
ceph-users mailing list -- ceph-users@xxxxxxx
To unsubscribe send an email to ceph-users-leave@xxxxxxx