Hi,
sometimes it helps to fail the MGR service. I just had this with a
customer last week, where we had to fail it twice within a few hours
because the information was not updated. That was on the latest Octopus.
ceph mgr fail
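For completeness, a typical sequence would look something like this (these need to be run on an admin/mon host of a live cluster, so the output will obviously differ on yours):

```shell
# Show which mgr daemon is active and which are standbys
ceph mgr stat

# Fail the active mgr so a standby takes over and state is refreshed
ceph mgr fail

# Then re-check whether the stale alerts have cleared
ceph health detail
```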
As for the MTU mismatch, I believe there was a thread a few weeks ago,
but I don't have a link at hand. I also can't remember whether there
was a solution.
Quoting Zakhar Kirpichenko <zakhar@xxxxxxxxx>:
Hi,
I seem to have some stale monitoring alerts in my Mgr UI which do not want
to go away. For example (I'm also attaching an image for your convenience):
MTU Mismatch: Node ceph04 has a different MTU size (9000) than the median
value on device storage-int.
The alert appears to be active, but it doesn't reflect the actual situation:
06:00 [root@ceph04 ~]# ip li li | grep -E "ens2f0|ens3f0|8: bond0|storage-int"
4: ens3f0: <BROADCAST,MULTICAST,SLAVE,UP,LOWER_UP> mtu 9000 qdisc mq master bond0 state UP mode DEFAULT group default qlen 1000
6: ens2f0: <BROADCAST,MULTICAST,SLAVE,UP,LOWER_UP> mtu 9000 qdisc mq master bond0 state UP mode DEFAULT group default qlen 1000
8: bond0: <BROADCAST,MULTICAST,MASTER,UP,LOWER_UP> mtu 9000 qdisc noqueue state UP mode DEFAULT group default qlen 1000
10: storage-int@bond0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 9000 qdisc noqueue state UP mode DEFAULT group default qlen 1000
I have similarly stuck alerts about 'high pg count deviation', which
triggered during a cluster rebalance but somehow never cleared, even though
all operations finished successfully and the CLI tools report that the
cluster is healthy. How can I clear these alerts?
I would very much appreciate any advice.
Best regards,
Zakhar
_______________________________________________
ceph-users mailing list -- ceph-users@xxxxxxx
To unsubscribe send an email to ceph-users-leave@xxxxxxx