Re: Found unknown daemon type ceph-exporter on host after upgrade to 17.2.7

Found some time, and the following cleared most references to ceph-exporter. There are only some files left in /var/lib/containers/storage/overlay/.

# Hosts with leftover ceph-exporter daemons; replace <cluster_id> below with the cluster fsid.
HOSTSTOCLEAN="host1
host2
host3"

for HOST in ${HOSTSTOCLEAN}; do
	ssh -o StrictHostKeyChecking=accept-new ${HOST} '
	# Stop and disable the stray ceph-exporter systemd unit.
	sudo systemctl disable ceph-<cluster_id>@ceph-exporter.$(hostname -s).service; \
	sudo systemctl stop ceph-<cluster_id>@ceph-exporter.$(hostname -s).service; \
	# Remove the daemon directory and its custom config files.
	sudo rm -rf /var/lib/ceph/<cluster_id>/ceph-exporter.$(hostname -s); \
	sudo rm -rf /var/lib/ceph/<cluster_id>/custom_config_files/ceph-exporter.$(hostname -s); \
	# Clean up the now unused container images and volumes.
	sudo podman system prune -a -f; \
	sudo podman volume prune -f; \
	'
done
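
For completeness, something like the following can be run on each host afterwards to verify that nothing ceph-exporter related is left behind (same <cluster_id> placeholder as above; the .target.wants path is my assumption based on how cephadm normally enables its units):

# cephadm should no longer list a ceph-exporter daemon:
sudo cephadm ls --no-detail | grep ceph-exporter
# no systemd unit or leftover symlink should remain (the wants directory is an assumption):
systemctl list-units --all 'ceph-*@ceph-exporter.*' --no-legend
ls /etc/systemd/system/ceph-<cluster_id>.target.wants/ 2>/dev/null | grep ceph-exporter
# the daemon directory should be gone:
ls -d /var/lib/ceph/<cluster_id>/ceph-exporter.$(hostname -s) 2>/dev/null
# size of what is still left under the overlay storage:
sudo du -sh /var/lib/containers/storage/overlay/ 2>/dev/null

If the cephadm check comes back empty and the warning stops appearing in the logs, I'd assume what remains under overlay/ is just unused image data.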

> On 08-11-2023 18:13 CET, Sake Ceph <ceph@xxxxxxxxxxx> wrote:
> 
>  
> Unfortunately I also can't test this tomorrow; Friday it will be the first thing I do. 
> 
> Best regards, 
> Sake
> 
> > On 08-11-2023 17:08 CET, Dmitry Melekhov <dm@xxxxxxxxxx> wrote:
> > 
> >  
> > Sake, could you, please, try this and inform me about result?
> > 
> > Unfortunately I will not be able to do this tomorrow :-(
> > 
> > 
> > On 08.11.2023 19:52, Adam King wrote:
> > > Seems you're right. The normal code to remove stray daemons doesn't seem to
> > > work with unknown types. For cleanup, I found on each node that this worked
> > >
> > > [root@vm-01 ~]# cephadm ls --no-detail | grep ceph-exporter
> > >          "name": "ceph-exporter.vm-01",
> > >          "systemd_unit": "ceph-7875b090-7e49-11ee-a5a9-525400a17001@ceph-exporter.vm-01"
> > > [root@vm-01 ~]#
> > > [root@vm-01 ~]# systemctl stop ceph-7875b090-7e49-11ee-a5a9-525400a17001@ceph-exporter.vm-01
> > > [root@vm-01 ~]#
> > > [root@vm-01 ~]# rm -rf /var/lib/ceph/7875b090-7e49-11ee-a5a9-525400a17001/ceph-exporter.*
> > >
> > > so essentially stopping the systemd unit and then removing the daemon dir
> > > in /var/lib/ceph/<fsid>/. Removing the dir in /var/lib/ceph/<fsid>/ removes
> > > the warning in the log and stopping the systemd unit removes the actual
> > > container process.
> > 
> > I guess we also have to disable the daemon with systemctl and remove the
> > symlink in /etc/systemd ...
> > 
> > 
> > > On Wed, Nov 8, 2023 at 8:51 AM Sake <ceph@xxxxxxxxxxx> wrote:
> > >
> > >> Hi Adam,
> > >>
> > >> This also happens on our cluster where ceph-exporter got deployed. Cephadm
> > >> isn't removing any daemons and the logs are getting spammed about this :(
> > >>
> > >> What actions do we need to take to completely remove ceph-exporter
> > >> daemons?
> > >>
> > >> Best regards,
> > >> Sake
> > >>
> > >>
> > >> On 7 Nov 2023 19:51, Adam King <adking@xxxxxxxxxx> wrote:
> > >>
> > >> Sorry, there ended up being an issue with the ceph-exporter when it was
> > >> backported to quincy, so it was removed for 17.2.7. You should be able to
> > >> do `ceph orch rm ceph-exporter` if there is a "ceph-exporter" service in
> > >> the "ceph orch ls" output. If it's not in the "ceph orch ls" output, I
> > >> believe cephadm will just remove the ceph-exporter daemons on its own and
> > >> the warnings will clear out. I believe it shouldn't cause any issues with
> > >> the rest of the cluster.
> > >>
> > >> On Tue, Nov 7, 2023 at 8:10 AM Dmitry Melekhov <dm@xxxxxxxxxx> wrote:
> > >>
> > >>> Hello!
> > >>>
> > >>> I see
> > >>>
> > >>> [WRN] Found unknown daemon type ceph-exporter on host
> > >>>
> > >>> for all 3 ceph servers in logs, after upgrade to 17.2.7 from 17.2.6 in
> > >>> dashboard
> > >>>
> > >>> and
> > >>>
> > >>> cephadm ['--image', 'quay.io/ceph/ceph@sha256:1fcdbead4709a7182047f8ff9726e0f17b0b209aaa6656c5c8b2339b818e70bb', '--timeout', '895', 'ls']
> > >>> 2023-11-07 16:15:37,531 7fddb699b740 WARNING version for unknown daemon type ceph-exporter
> > >>>
> > >>> in
> > >>>
> > >>> cephadm.log
> > >>>
> > >>>
> > >>> There were no such messages before.
> > >>>
> > >>> I guess this will not create any problem, but is there any way to fix this?
> > >>>
_______________________________________________
ceph-users mailing list -- ceph-users@xxxxxxx
To unsubscribe send an email to ceph-users-leave@xxxxxxx



