Re: redundancy with 2 nodes


 



On 01/01/15 23:16, Christian Balzer wrote:

Hello,

On Thu, 01 Jan 2015 18:25:47 +1300 Mark Kirkwood wrote:
but I agree that you should probably not get a HEALTH OK status when you
have just set up 2 (or in fact any even number of) monitors...HEALTH WARN
would make more sense, with a wee message suggesting adding at least one
more!


I think what Jiri meant is that when the whole cluster goes into a deadlock
due to losing monitor quorum, ceph -s etc. won't work anymore either.


Right - but looking at the health output from his earlier post:

cephadmin@ceph1:~$ ceph status
    cluster bce2ff4d-e03b-4b75-9b17-8a48ee4d7788
     health HEALTH_OK
monmap e1: 2 mons at {ceph1=192.168.30.21:6789/0,ceph2=192.168.30.22:6789/0}, election epoch 12, quorum 0,1 ceph1,ceph2
     mdsmap e7: 1/1/1 up {0=ceph1=up:active}, 1 up:standby
     osdmap e88: 4 osds: 4 up, 4 in
      pgmap v2051: 1280 pgs, 5 pools, 13184 MB data, 3328 objects
            26457 MB used, 11128 GB / 11158 GB avail
                1280 active+clean

...if he had received some sort of caution about the number of mons instead of HEALTH OK from that health status, then he might have added another *before* everything locked up. That's what I meant before.
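For context, quorum needs a strict majority of the monmap, so with 2 mons losing either one stalls the whole cluster, while 3 mons can tolerate one failure. A rough sketch of that preventive step (the ceph3 host and the 192.168.30.23 address are made up for illustration, and the new mon daemon would still need to be installed and started on that host):

cephadmin@ceph1:~$ ceph mon dump          # show the current monmap (2 mons here)
cephadmin@ceph1:~$ ceph quorum_status     # confirm which mons are currently in quorum
cephadmin@ceph1:~$ ceph mon add ceph3 192.168.30.23:6789   # register a third mon in the monmap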

And while the cluster rightfully shouldn't be doing anything in such a
state, querying the surviving/reachable monitor and being told as much
would indeed be a nice feature, as opposed to deafening silence.


Sure, getting nothing is highly undesirable.
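One thing that does still answer when quorum is gone is the mon's local admin socket, so you can at least see that monitor's own view (probing, electing, etc.) rather than nothing at all. A sketch, assuming the default socket path and the mon id ceph1 from the output above:

cephadmin@ceph1:~$ sudo ceph daemon mon.ceph1 mon_status
# or the long form, if the daemon shorthand isn't available:
cephadmin@ceph1:~$ sudo ceph --admin-daemon /var/run/ceph/ceph-mon.ceph1.asok mon_status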

As for your suggestion, while certainly helpful, it is my not so humble
opinion that the WARN state right now is totally overloaded and quite
frankly bogus.
This is particularly a problem with monitoring plugins that just pick up the
WARN state without further discrimination.



Yeah, I agree that WARN is hopelessly overloaded. In the past I have had to dig back through the logs to see what the warning was actually about, and whether it was really something that needed attention.
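For what it's worth, ceph health detail at least spells out which specific checks are behind the WARN, and the JSON output is easier for a plugin to match on particular causes instead of the blanket state - a sketch only, not a substitute for proper severity levels:

cephadmin@ceph1:~$ ceph health detail           # list each active warning and its cause
cephadmin@ceph1:~$ ceph health --format json    # same information, machine-readable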

regards

Mark
_______________________________________________
ceph-users mailing list
ceph-users@xxxxxxxxxxxxxx
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com


