HEALTH_OK when one server crashed?

Hi,

One of our Ceph servers froze this morning (no idea why, alas). Ceph
noticed and moved things around, but when I ran ceph -s, it said:

root@sto-1-1:~# ceph -s
    cluster 049fc780-8998-45a8-be12-d3b8b6f30e69
     health HEALTH_OK
     monmap e2: 3 mons at
{sto-1-1=172.27.6.11:6789/0,sto-2-1=172.27.6.14:6789/0,sto-3-1=172.27.6.17:6789/0}
            election epoch 250, quorum 0,1,2 sto-1-1,sto-2-1,sto-3-1
     osdmap e9899: 540 osds: 480 up, 480 in
            flags sortbitwise
      pgmap v4549229: 20480 pgs, 25 pools, 7559 GB data, 1906 kobjects
            22920 GB used, 2596 TB / 2618 TB avail
               20480 active+clean
  client io 5416 kB/s rd, 6598 kB/s wr, 44 op/s rd, 53 op/s wr
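
The only hint is the osdmap line: 540 osds but only 480 up/in, and the
60 down ones are never named. To enumerate them, something like this
sketch should work (assuming "ceph osd dump -f json" exposes per-OSD
"osd" and "up" fields; the exact JSON layout may vary by release):

#!/usr/bin/env python
# Rough sketch: list the OSD ids the osdmap considers down.
# Assumes "ceph osd dump -f json" returns an "osds" array whose
# entries carry "osd" (the id) and "up" (0 or 1) fields.
import json
import subprocess

dump = json.loads(subprocess.check_output(["ceph", "osd", "dump", "-f", "json"]))
down = sorted(o["osd"] for o in dump["osds"] if not o["up"])
print("%d OSDs down: %s" % (len(down), down))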

Is it intentional that it reports HEALTH_OK when an entire server's
worth of OSDs is dead? You have to look quite hard at the output to
notice that 60 of the 540 OSDs are unaccounted for.
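
For anyone wanting to catch this from monitoring in the meantime,
something along these lines should work (a sketch; depending on the
release, "ceph osd stat -f json" may put the counters at the top level
or nest them under an "osdmap" key, so this handles both):

#!/usr/bin/env python
# Rough sketch: flag the up-vs-total OSD mismatch that HEALTH_OK hides.
# Assumes "ceph osd stat -f json" reports num_osds/num_up_osds, either
# top-level or under an "osdmap" key (layout varies by release).
import json
import subprocess
import sys

stat = json.loads(subprocess.check_output(["ceph", "osd", "stat", "-f", "json"]))
stat = stat.get("osdmap", stat)  # unwrap the nested layout if present

missing = stat["num_osds"] - stat["num_up_osds"]
if missing:
    print("WARNING: %d of %d OSDs down" % (missing, stat["num_osds"]))
    sys.exit(1)
print("all %d OSDs up" % stat["num_osds"])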

Regards,

Matthew


-- 
 The Wellcome Trust Sanger Institute is operated by Genome Research 
 Limited, a charity registered in England with number 1021457 and a 
 company registered in England with number 2742969, whose registered 
 office is 215 Euston Road, London, NW1 2BE. 