Hi,
You probably have empty OSD nodes (host buckets with no OSDs under them) in your
CRUSH tree. Can you send the output of 'ceph osd tree'?
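
If that turns out to be the case, something along these lines should show the
stale buckets and let you clean them up (the <old-host> name below is just a
placeholder, not a name from your cluster):

  # List the CRUSH tree; stale host buckets show up with no osd.* entries under them.
  ceph osd tree

  # Once you have confirmed a host bucket is empty and no longer needed,
  # it can be removed from the CRUSH map:
  ceph osd crush rm <old-host>
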
Thanks,
Eugen
Quoting Jan Kasprzak <kas@xxxxxxxxxx>:
Hello, Ceph users!
I have recently noticed that when I reboot a single ceph node,
ceph -s reports "5 hosts down" instead of one. The following
is captured during reboot of a node with two OSDs:
    health: HEALTH_WARN
            noout flag(s) set
            2 osds down
            5 hosts (2 osds) down
[...]
    mon: 3 daemons, quorum mon1,mon3,mon2 (age 8h)
    mgr: mon2(active, since 2d), standbys: mon3, mon1
    osd: 34 osds: 32 up (since 2m), 34 in (since 4M)
         flags noout
    rgw: 1 daemon active (1 hosts, 1 zones)
After the node successfully reboots, ceph -s reports HEALTH_OK
and of course no OSDs and no hosts are reported as being down.
Does anybody else see this as well? This is Ceph 18.2.1, but I think
I have seen this on Ceph 17 as well.
Thanks,
-Yenya
--
| Jan "Yenya" Kasprzak <kas at {fi.muni.cz - work | yenya.net - private}> |
| https://www.fi.muni.cz/~kas/ GPG: 4096R/A45477D5 |
We all agree on the necessity of compromise. We just can't agree on
when it's necessary to compromise. --Larry Wall
_______________________________________________
ceph-users mailing list -- ceph-users@xxxxxxx
To unsubscribe send an email to ceph-users-leave@xxxxxxx