Hi,
On 03.08.2017 16:31, c.monty@xxxxxx wrote:
Hello!
I have purged my ceph and reinstalled it.
ceph-deploy purge node1 node2 node3
ceph-deploy purgedata node1 node2 node3
ceph-deploy forgetkeys
All disks configured as OSDs are physically in two servers.
Due to some restrictions I had to reduce the total number of disks used as OSDs, which means I now have fewer disks than before.
The installation with ceph-deploy finished without errors.
However, when I start all OSDs (on either of the servers), some services end up with status "failed":
ceph-osd@70.service loaded failed failed Ceph object storage daemon
ceph-osd@71.service loaded failed failed Ceph object storage daemon
ceph-osd@92.service loaded failed failed Ceph object storage daemon
ceph-osd@93.service loaded failed failed Ceph object storage daemon
ceph-osd@94.service loaded failed failed Ceph object storage daemon
ceph-osd@95.service loaded failed failed Ceph object storage daemon
ceph-osd@96.service loaded failed failed Ceph object storage daemon
All of these services belong to the previous installation.
If I stop and disable one of the failed services, e.g.
systemctl stop ceph-osd@70.service
systemctl disable ceph-osd@70.service
the status is correct afterwards.
However, when I trigger
systemctl restart ceph-osd.target
these zombie services first go into status "auto-restart" and then "fail" again.
As a workaround I have to mask the zombie services, e.g.
systemctl mask ceph-osd@70.service
but this should not be the final solution.
Question:
How can I get rid of the zombie services "ceph-osd@xx.service"?
If you are sure that these OSDs are "zombies", you can remove the
dependencies on them from ceph-osd.target. On CentOS, these are symlinks
in /etc/systemd/system/ceph-osd.target.wants/ .
Do not forget to reload systemd afterwards. There might also be a nice
systemctl command for removing such dependencies.
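For reference, the cleanup could look roughly like the sketch below. The path assumes CentOS as above, and the OSD ids 70-96 are taken from the failed units in your mail; adjust both to your system. It only echoes the commands as a dry run, so you can check the list first and drop the "echo" once it looks right:

```shell
# Remove leftover ceph-osd@NN dependency links, then reload systemd.
# Dry run: commands are only echoed; remove "echo" to actually execute them.
wants_dir=/etc/systemd/system/ceph-osd.target.wants

for id in 70 71 92 93 94 95 96; do
    echo rm -f "$wants_dir/ceph-osd@${id}.service"
done

# systemd must re-read its unit files after the links are gone:
echo systemctl daemon-reload
```

Afterwards a "systemctl restart ceph-osd.target" should no longer pull in the removed units.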
Regards,
Burkhard
_______________________________________________
ceph-users mailing list
ceph-users@xxxxxxxxxxxxxx
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com