Hello,

> Note that with 6 monitors, quorum requires 4.
>
> So if only 3 are running, the system cannot work.
>
> With one old removed there would be 5 possible, then with quorum of 3.

Good point! I hadn't thought of that. Looks like it works if I remove one, thanks a lot!

Best,
Yoann

> On Tue, 10 Mar 2020, Paul Emmerich wrote:
>
>> On Tue, Mar 10, 2020 at 8:18 AM Yoann Moulin <yoann.moulin@xxxxxxx> wrote:
>>> I have added 3 new monitors on 3 VMs and I'd like to stop the 3 old monitor daemons. But as soon as I stop the 3rd old monitor, the cluster gets stuck
>>> because the election of a new monitor fails.
>>
>> By "stop" you mean "stop and then immediately remove before stopping
>> the next one"? Otherwise that's the problem.
>>
>> --
>> Paul Emmerich
>>
>> Looking for help with your Ceph cluster? Contact us at https://croit.io
>>
>> croit GmbH
>> Freseniusstr. 31h
>> 81247 München
>> www.croit.io
>> Tel: +49 89 1896585 90
>>
>>>
>>> The 3 old monitors are on 14.2.4-1xenial
>>> The 3 new monitors are on 14.2.7-1bionic
>>>
>>>> 2020-03-09 16:06:00.167 7fc4a3138700 1 mon.icvm0017@3(peon).paxos(paxos active c 20918592..20919120) lease_timeout -- calling new election
>>>> 2020-03-09 16:06:02.143 7fc49f931700 1 mon.icvm0017@3(probing) e4 handle_auth_request failed to assign global_id
>>>
>>> Did I miss something?
>>>
>>> In attachment: some logs and ceph.conf
>>>
>>> Thanks for your help.
>>>
>>> Best,
>>> --
>>> Yoann Moulin
>>> EPFL IC-IT

_______________________________________________
ceph-users mailing list -- ceph-users@xxxxxxx
To unsubscribe send an email to ceph-users-leave@xxxxxxx
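
For anyone hitting the same issue, the quorum arithmetic Paul describes follows the usual majority rule: a monitor cluster of n needs floor(n/2) + 1 monitors up to elect a leader. A minimal illustrative Python sketch of that arithmetic (not a Ceph tool, just the math):

    # Majority quorum: a monitor cluster of size n needs floor(n/2) + 1 monitors up.
    def mon_quorum(n: int) -> int:
        return n // 2 + 1

    for n in (3, 5, 6):
        print(f"{n} monitors -> quorum of {mon_quorum(n)}")
    # 3 monitors -> quorum of 2
    # 5 monitors -> quorum of 3
    # 6 monitors -> quorum of 4

So with 6 monitors registered and only the 3 new ones running, only 3 of the required 4 are up and no election can succeed; removing one old monitor first drops the requirement to 3 of 5.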