Hello,
We removed some nodes from our cluster. This worked without any problems.
Now, when we reboot one of the remaining nodes, many of its OSDs no
longer join the cluster.
They always run into timeouts:
--> ceph-volume lvm activate successful for osd ID: XX
monclient(hunting): authenticate timed out after 300
MONs and MGRs are running fine.
The network is working; netcat shows the MONs' ports are open and reachable.
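For example (mon addresses are placeholders):

nc -zv <mon-host> 3300   # msgr2 port
nc -zv <mon-host> 6789   # legacy msgr1 port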
Setting a higher debug level has no effect, even when we add it to the
ceph.conf file.
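For reference, the kind of settings we added (a sketch; the exact values varied):

[global]
debug_ms = 1/5
debug_monc = 20/20
debug_auth = 20/20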
The PGs are pretty unhappy, e.g. PG 7.143:

7.143: 87771 objects, 0 degraded, 0 misplaced, 0 unfound,
  314744902235 bytes, log 10081, state down
  (since 2023-06-20T09:16:03.546158+0000)
  version 961275'1395646, reported 961300:9605547
  up [209,NONE,NONE] (primary 209), acting [209,NONE,NONE] (primary 209)
  last scrub 961231'1395512, 2023-06-19T23:46:40.101791+0000
  last deep scrub 961231'1395512, 2023-06-19T23:46:40.101791+0000
A PG query suggests marking an OSD as lost; however, I do not want to do this.
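That is, something along these lines (PG id from above; the OSD id is
just a placeholder, and we have not run the second command):

ceph pg 7.143 query
ceph osd lost <osd-id> --yes-i-really-mean-it
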
OSDs are blocked by OSDs from the removed nodes:
ceph osd blocked-by
osd num_blocked
152 38
244 41
144 54
...
We added the removed hosts back to the cluster and tried to start the
OSDs on those nodes; they ran into the same timeout mentioned above.
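For reference, roughly how we tried to start them on a re-added node
(daemon/unit names are examples; this assumes cephadm-managed containers):

ceph orch daemon start osd.<id>
# or directly via the host's systemd unit:
systemctl start ceph-<fsid>@osd.<id>.service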
This is a containerized cluster running version 16.2.10.
Replication is 3; some pools use an erasure-coded profile.
Best regards,
Malte