Hello,

On a Nautilus cluster, I'd like to move the monitors from bare-metal servers to VMs to prepare a migration. I have added 3 new monitors on 3 VMs and I'd like to stop the 3 old monitor daemons. But as soon as I stop the 3rd old monitor, the cluster gets stuck because the election of a new monitor fails.

The 3 old monitors are on 14.2.4-1xenial
The 3 new monitors are on 14.2.7-1bionic

> 2020-03-09 16:06:00.167 7fc4a3138700 1 mon.icvm0017@3(peon).paxos(paxos active c 20918592..20919120) lease_timeout -- calling new election
> 2020-03-09 16:06:02.143 7fc49f931700 1 mon.icvm0017@3(probing) e4 handle_auth_request failed to assign global_id

Did I miss something?

Attached: some logs and ceph.conf.

Thanks for your help.

Best,

--
Yoann Moulin
EPFL IC-IT
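For reference, roughly the commands involved when I stop one of the old monitors (icadmin008 is just an example hostname taken from the ceph.conf below; at this point I only stop the daemons, I do not remove them from the monmap):

  # on the old bare-metal host
  systemctl stop ceph-mon@icadmin008

  # from another node: check the monmap and the quorum
  ceph mon stat
  ceph quorum_status --format json-pretty

  # removing a monitor from the monmap would be a separate step, not done yet:
  # ceph mon remove icadmin008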
# Please do not change this file directly since it is managed by Ansible and will be overwritten

[global]
cluster network = 192.168.47.0/24
fsid = 778234df-5784-4021-b983-0ee1814891be
mon host = [v2:10.90.36.16:3300,v1:10.90.36.16:6789],[v2:10.90.36.17:3300,v1:10.90.36.17:6789],[v2:10.90.36.18:3300,v1:10.90.36.18:6789],[v2:10.95.32.45:3300,v1:10.95.32.45:6789],[v2:10.95.32.46:3300,v1:10.95.32.46:6789],[v2:10.95.32.48:3300,v1:10.95.32.48:6789]
mon initial members = icadmin006,icadmin007,icadmin008,icvm0017,icvm0018,icvm0022
osd pool default crush rule = -1
osd_crush_chooseleaf_type = 1
osd_op_queue_cut_off = high
osd_pool_default_pg_num = 8
osd_pool_default_pgp_num = 8
public network = 10.90.36.0/24,10.90.47.0/24,10.95.32.0/20
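For what it's worth, once the old monitors are gone for good, I expect to trim mon host and mon initial members down to the VM entries only, something like this (my assumption, not applied yet):

  mon host = [v2:10.95.32.45:3300,v1:10.95.32.45:6789],[v2:10.95.32.46:3300,v1:10.95.32.46:6789],[v2:10.95.32.48:3300,v1:10.95.32.48:6789]
  mon initial members = icvm0017,icvm0018,icvm0022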