Re: understanding orchestration and cephadm

Hi Gary,

It looks like everything you did is fine.  I think the "problem" is
that cephadm has/had some logic that tried to leave users with an odd
number of monitors.  I'm pretty sure this is why two of them were
removed.

This code has been removed in pacific, and should probably be
backported to octopus.
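
For illustration only, here is a rough sketch (plain Python, not the
actual cephadm source) of what a "keep an odd number of mons" rule
looks like, and why it would trim a deployment down by removing the
"extra" daemons:

    # Illustrative sketch only -- not the real cephadm logic.
    # A rule that prefers an odd mon count trims an even deployment
    # down to the next lower odd number.

    def target_mon_count(candidate_hosts: int) -> int:
        """Largest odd mon count that fits on the candidate hosts."""
        if candidate_hosts <= 1:
            return candidate_hosts
        if candidate_hosts % 2 == 1:
            return candidate_hosts
        return candidate_hosts - 1

    print(target_mon_count(4))  # 3 -> one mon gets removed
    print(target_mon_count(2))  # 1 -> one mon gets removed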

There is nothing wrong with an even number of mons.  The only number
you might want to avoid is 2 because a failure of either monitor will
cause the cluster to lose quorum and become unavailable (quorum
requires > N/2, which in a 2-mon case means both mons).  As far as
availability goes that is probably not ideal, but as far as durability
goes, it's extremely useful to have a duplicate copy of the mon data
so that losing a single disk doesn't destroy the cluster metadata (and
require a complicated recovery process).
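
If it helps to see the arithmetic, here is a minimal sketch (plain
Python, not Ceph code) of the majority rule described above:

    # Quorum requires strictly more than half of the mons (a majority).

    def quorum_size(num_mons: int) -> int:
        """Smallest mon count that is strictly greater than num_mons / 2."""
        return num_mons // 2 + 1

    def tolerable_failures(num_mons: int) -> int:
        """How many mons can fail while the rest still form a quorum."""
        return num_mons - quorum_size(num_mons)

    for n in (1, 2, 3, 4, 5):
        print(f"{n} mons: quorum={quorum_size(n)}, can lose {tolerable_failures(n)}")
    # 2 mons: quorum=2, can lose 0  -> either failure stalls the cluster
    # 3 mons: quorum=2, can lose 1
    # 4 mons: quorum=3, can lose 1  -> same failure tolerance as 3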

In any case, generally speaking, nobody should worry about having an
even number of monitors.  Focus instead on getting >2 so you can
tolerate at least one mon failure and keep the cluster running.


On Wed, Mar 31, 2021 at 10:14 AM Gary Molenkamp <molenkam@xxxxxx> wrote:
> A nautilus cluster with two mons (I know this is not correct for
> quorum), a mgr, and a handful of osds.  I went through the adoption

Any number of monitors is correct.  Fewer than 3 is not recommended.

sage
_______________________________________________
ceph-users mailing list -- ceph-users@xxxxxxx
To unsubscribe send an email to ceph-users-leave@xxxxxxx


