Re: mon woes

Wolfgang Hennerbichler writes:
> I have a ceph cluster on 2 sites. One site has 2 mons, the other site
> has 1 mon. [...]

As Martin wrote, if you lose the site with the 2 mons, the entire
cluster will become unavailable.

Here's what I've been thinking could be a nice solution:

Get a third site somewhere, and move one of the two mons currently at
your first site to that third site.  The site only needs space and
performance for one (1) VM running ceph-mon.  Ideally it would be
reliable, well-connected in terms of RTT etc. - but even if it isn't,
that may not matter so much.
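
Just to make the layout concrete, here's roughly what I imagine the mon
bits of ceph.conf looking like (hostnames and addresses below are made
up; I haven't actually deployed this, so treat it as a sketch):

    [global]
    # one mon per site; mon.c is the small tie-breaker VM
    mon initial members = a, b, c
    mon host = 10.0.1.10, 10.0.2.10, 10.0.3.10

    # "real" site 1
    [mon.a]
    host = mon-site1
    mon addr = 10.0.1.10:6789

    # "real" site 2
    [mon.b]
    host = mon-site2
    mon addr = 10.0.2.10:6789

    # third site, mon-only VM
    [mon.c]
    host = mon-site3
    mon addr = 10.0.3.10:6789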

My reasoning is that under normal conditions, the two mons in your
"real" sites will be sufficient (quorum) to maintain consistency of the
cluster.  So even if the third-site mon is somehow "asleep at the
wheel", that wouldn't necessarily have any noticeable impact on your
cluster's performance.  (That's pure hypothesis, I haven't tried this or
otherwise thought this through.  Please comment if you disagree!)

If you do lose one of the "real" sites, well, then you still have
quorum with your third, mon-only site.  In that case you'd start
noticing if the third site is slow or not-so-well-connected, but hey,
at least you still have service!

I guess you can also arrange the mon ranks so that the third-site
monitor never gets elected as leader.  If I remember correctly, mon
ranks are derived from sorting the monitors' IP:port addresses (lowest
address gets rank 0) and the leader is the mon with the lowest rank in
the quorum, so giving the third-site mon the numerically highest
address should keep it out of the leader role as long as one of the
"real" mons is up.

This scheme can be extended to higher numbers, e.g. if you have three
"real" Ceph sites, you can add two external mon-only sites to survive
the loss of any two datacenters, etc.
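
To spell out the arithmetic behind that (nothing Ceph-specific, just
majority-quorum counting; the little Python below only illustrates my
reasoning, it's not something you'd run against a cluster):

    # Majority-quorum arithmetic, one mon per site:
    # to keep quorum after losing any f sites you need 2f+1 mons in
    # total, and any f+1 surviving mons form a majority.
    def mons_needed(failures_to_survive):
        total = 2 * failures_to_survive + 1
        quorum = failures_to_survive + 1
        return total, quorum

    for f in (1, 2):
        total, quorum = mons_needed(f)
        print("survive %d lost site(s): %d mons, quorum of %d"
              % (f, total, quorum))

    # survive 1 lost site(s): 3 mons, quorum of 2  (your 2 sites + 1 extra)
    # survive 2 lost site(s): 5 mons, quorum of 3  (3 sites + 2 extra)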

I come from an ISP background.  We have few "real" datacenter locations,
but many sites where we could deploy an additional VM or two.  Also,
connectivity from these non-datacenter sites is still pretty good, and
the backbone has been engineered to avoid fate-sharing between
locations.  So that's why I like this concept.  But as I said, I haven't
tried this yet - for now, all our Ceph nodes are in a single rack.

Any opinions on this?
-- 
Simon.
_______________________________________________
ceph-users mailing list
ceph-users@xxxxxxxxxxxxxx
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com

