ceph -s: wrong host count

	Hello, Ceph users!

I have recently noticed that when I reboot a single Ceph node,
ceph -s reports "5 hosts down" instead of one. The following was
captured during the reboot of a node with two OSDs:

    health: HEALTH_WARN
            noout flag(s) set
            2 osds down
            5 hosts (2 osds) down
[...]
    mon: 3 daemons, quorum mon1,mon3,mon2 (age 8h)
    mgr: mon2(active, since 2d), standbys: mon3, mon1
    osd: 34 osds: 32 up (since 2m), 34 in (since 4M)
         flags noout
    rgw: 1 daemon active (1 hosts, 1 zones)
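
For completeness, the reboot itself is nothing special; it is roughly
the usual noout dance (the host name below is just a placeholder):

    # prevent rebalancing while the node's OSDs are down
    ceph osd set noout

    # reboot the node (placeholder host name)
    ssh osd-node1 reboot

    # watch the cluster state while the node is down
    ceph -s

    # once the node and both of its OSDs are back up
    ceph osd unset noout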

After the node successfully reboots, ceph -s reports HEALTH_OK again,
and of course no OSDs or hosts are reported as down.
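
In case anybody wants to cross-check this on their own cluster, something
like the following should show which OSDs (and hence which hosts) the
cluster actually considers down while the warning is displayed; treat it
as a rough sketch:

    # per-daemon detail behind the HEALTH_WARN summary
    ceph health detail

    # list only the OSDs currently marked down, grouped by CRUSH host
    ceph osd tree down

Given that ceph -s above counts only 2 OSDs down, I would expect exactly
one host to be reported, not five.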

Does anybody else see this? This is Ceph 18.2.1, but I think I have
seen it on Ceph 17 as well.

Thanks,

-Yenya

-- 
| Jan "Yenya" Kasprzak <kas at {fi.muni.cz - work | yenya.net - private}> |
| https://www.fi.muni.cz/~kas/                        GPG: 4096R/A45477D5 |
    We all agree on the necessity of compromise. We just can't agree on
    when it's necessary to compromise.                     --Larry Wall


