On 1/30/20 1:55 PM, Gregory Farnum wrote:
> On Thu, Jan 30, 2020 at 1:38 PM Wido den Hollander <wido@xxxxxxxx> wrote:
>>
>> On 1/30/20 1:34 PM, vishal@xxxxxxxxxxxxxxx wrote:
>>> I am testing failure scenarios for my cluster. I have 3 monitors. Let's say mons 1 and 2 go down, so the monitors can't form a quorum. How can I recover?
>>>
>>> Are the instructions at the following link valid for deleting mons 1 and 2 from the monmap? https://access.redhat.com/documentation/en-us/red_hat_ceph_storage/1.2.3/html/red_hat_ceph_administration_guide/remove_a_monitor#removing-monitors-from-an-unhealthy-cluster
>>>
>>> One more question: let's say I delete mons 1 and 2 from the monmap, and the cluster has only mon 3 remaining, so mon 3 has quorum. Now what happens if mons 1 and 2 come back up? Do they join mon 3, so that there are again 3 monitors in the cluster?
>>>
>>
>> The epoch of the monmap has increased by removing mons 1 and 2 from the map. Only mon 3 has this new map with the new epoch.
>>
>> Therefore, if mons 1 and 2 boot, they see that the epoch of mon 3 is newer and thus won't be able to join.
>
> If you delete the monitors from the map using ceph commands, so that
> they KNOW they've been removed, this is fine. But you don't want to do
> that to a cluster using offline tools: if monitor 3 dies before mons 1
> and 2 turn on, they will find each other, not see another peer, and
> say "hey, we are 2 of the 3 monitors in the map, let's form a quorum!"

Aha, yes! That's a situation I didn't think of. Good addition to my answer.

Wido

> -Greg
>
>>
>> Wido
>>
>>> Thanks
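
For anyone taking the offline route Greg warns about, the Red Hat link above boils down to monmap surgery on the surviving monitor. A rough sketch of those steps, assuming the surviving monitor is named mon3, the dead ones mon1 and mon2, systemd-managed daemons, and default store paths (the names and paths here are illustrative, not taken from the thread):

    # Stop the surviving monitor before touching its store
    systemctl stop ceph-mon@mon3

    # Extract its current monmap
    ceph-mon -i mon3 --extract-monmap /tmp/monmap

    # Remove the unreachable monitors from the extracted map
    monmaptool /tmp/monmap --rm mon1
    monmaptool /tmp/monmap --rm mon2

    # Inject the trimmed map and start the monitor again;
    # mon3 is now the only monitor in the map and forms a quorum of one
    ceph-mon -i mon3 --inject-monmap /tmp/monmap
    systemctl start ceph-mon@mon3

    # Verify the map and quorum afterwards
    ceph mon dump
    ceph quorum_status

Per Greg's warning, mons 1 and 2 should not be allowed to restart with their old stores after this; wipe and re-deploy them once the cluster is healthy again so they rejoin with the new map instead of their stale one.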