Re: Accidentally created systemd units for OSDs

If it makes you feel better, that sounds exactly like what happened to
me, and I have no idea how. Other than that I'd started with Octopus,
which was a transitional release: there were conflicting instructions
AND a reference in the Octopus docs to procedures using a tool that was
no longer distributed with Octopus.

If by "templated" you mean what I think you mean, the cleanup I'd
recommend is as follows:

1. Follow the documented process for draining and deleting an OSD (the
dashboard seems to handle this well; a rough command-line sketch of the
whole cleanup follows this list).

2. Make sure that your non-container service is shut down.

3. Also make sure that your OSD is completely removed from the "ceph
osd tree" output. That may require manually reweighting it to 0, which
I did from the command line. My thanks to Eugen Block for that assist.

4. At this point, hopefully "systemctl status" won't show either the
old-style or container-style OSD services as active. In that case, you
may manually delete the specific OSD unit file from /etc/systemd/system.
It probably won't hurt to delete the template as well, but I'd
recommend leaving it, since it will just come back soon anyway.

5. Finally, completely erase the OSD directory under /var/lib/ceph and
its counterpart under /var/lib/ceph/<fsid>. If they're already gone, so
much the better.
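
For what it's worth, here's a rough command-line sketch of the above,
assuming the OSD being retired is osd.7 (the id, the <fsid>, and the
exact paths are placeholders -- substitute your own). Adapt it rather
than pasting it blindly:

    # Step 1: drain and remove the OSD through the orchestrator
    # (the dashboard does the same thing; drop --zap if your release
    # doesn't support it)
    ceph orch osd rm 7 --zap

    # Step 3: if it still shows up in the tree, reweight to 0 and recheck
    ceph osd crush reweight osd.7 0
    ceph osd tree

    # Steps 2 and 4: make sure neither the legacy nor the container unit
    # is still active, then remove the stale legacy unit file
    systemctl stop ceph-osd@7.service
    systemctl disable ceph-osd@7.service
    rm -f /etc/systemd/system/ceph-osd@7.service   # only if it's actually there
    systemctl daemon-reload
    systemctl reset-failed

    # Step 5: erase the leftover OSD directories
    rm -rf /var/lib/ceph/osd/ceph-7        # legacy layout (default cluster name)
    rm -rf /var/lib/ceph/<fsid>/osd.7      # cephadm layout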

The OSD template unit is used by "ceph control" to dynamically generate
the systemd unit file under /run/ceph/<fsid> when it detects an OSD
definition, which is probably keyed on finding a resource under
/var/lib/ceph/<fsid>/osd.x, if not the OSD directory itself.

Unlike /etc/systemd, the /run/systemd area is effectively destroyed
when the system goes down, so its ceph files aren't permanent.
Manually erasing them thus has no benefit.
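
If you want to see which flavors of unit you actually have before
deleting anything, something along these lines (again, osd.7 and <fsid>
are placeholders) will show what's persistent versus what's
runtime-generated:

    # every ceph-related unit systemd knows about, active or not
    systemctl list-units --all 'ceph*'

    # the cephadm/container instance of the template, if present
    systemctl cat 'ceph-<fsid>@osd.7.service'

    # the old-style packaged instance, if present
    systemctl cat 'ceph-osd@7.service'

    # anything ceph-ish lurking in the volatile /run/systemd area
    grep -rl ceph /run/systemd/ 2>/dev/null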

So, in other words, the templated OSD systemd unit won't get created
unless the system actually has cephadm OSDs defined, and your problem
vanishes when the OSDs do!
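
A quick way to check what cephadm actually believes it has defined (and
therefore what it will keep regenerating units for):

    ceph orch ls osd            # the OSD service specs cephadm is managing
    ceph orch ps | grep osd     # the individual OSD daemons it has deployed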

If it's of any help, I have a quick-and-dirty way to spawn minimal VMs
and OSDs that you can pull from https://gogs.mousetech.com. It has two
parts: one constructs a VM, the other is an Ansible playbook for
installing the necessary cephadm infrastructure on a new machine, which
should work on bare OSes as well, as far as I can see, though I've not
tested that. Customize and/or plunder it for whatever benefits it might
give you.

    Tim

On Fri, 2024-08-16 at 19:24 +0000, Dan O'Brien wrote:
> I am 100% using cephadm and containers and plan to continue to do so.
> 
> Our original setup was all spinners, but after going to Ceph Days
> NYC, I pushed for SSDs to use for the WAL/RocksDb and I'm in the
> process of migrating the WAL/RocksDb. In general, it's been fairly
> straightforward -- IF YOU FOLLOW THE DIRECTIONS EXACTLY. From what I
> can tell (and someone please correct me if I'm wrong), it appears
> that I've just introduced a bit of cruft into systemd that duplicates
> the configuration of the container OSD. If I can get rid of that bit
> of bellybutton-lint, I think I'm OK without rebuilding the OSD. (At
> least until I screw it up again on one of the remaining OSDs I need
> to migrate).
> 
> Anyone know how to get rid of an instance of a templated systemd
> unit? PLEEEEEEEEEEZE?
_______________________________________________
ceph-users mailing list -- ceph-users@xxxxxxx
To unsubscribe send an email to ceph-users-leave@xxxxxxx


