This may not be directly related to your error, but they slap a DO NOT
UPGRADE FROM AN OLDER VERSION label on the Pacific release notes for a
reason: https://docs.ceph.com/en/latest/releases/pacific/
It means: please don't upgrade right now.
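Before you retry anything, it may also be worth recording exactly what each
mon reports. These are standard Ceph CLI commands, nothing specific to this
bug; the mon name "debian2" is taken from the log below, so adjust it to
whichever node you are on:

  # release and version every daemon in the cluster currently reports
  ceph versions

  # monmap contents, including min_mon_release
  ceph mon dump

  # raise mon logging at runtime through the local admin socket
  ceph daemon mon.debian2 config set debug_mon 20/20

The admin-socket form is handy here because it keeps working even while the
mon is out of quorum.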
On Wed, Dec 15, 2021 at 3:07 PM Michael Uleysky <uleysky@xxxxxxxxx> wrote:

> I am trying to upgrade a three-node nautilus cluster to pacific. I am
> updating ceph on one node and restarting the daemons. The OSDs are OK,
> but the monitor cannot enter the quorum.
> With debug_mon 20/20 I see repeating blocks like this in the log of the
> problem monitor:
>
> 2021-12-15T13:34:57.075+1000 7f6e1b417700 10 mon.debian2@1(probing) e4 bootstrap
> 2021-12-15T13:34:57.075+1000 7f6e1b417700 10 mon.debian2@1(probing) e4 sync_reset_requester
> 2021-12-15T13:34:57.075+1000 7f6e1b417700 10 mon.debian2@1(probing) e4 unregister_cluster_logger - not registered
> 2021-12-15T13:34:57.075+1000 7f6e1b417700 10 mon.debian2@1(probing) e4 cancel_probe_timeout 0x557603d82420
> 2021-12-15T13:34:57.075+1000 7f6e1b417700 10 mon.debian2@1(probing) e4 monmap e4: 3 mons at {debian1=[v2:172.16.21.101:3300/0,v1:172.16.21.101:6789/0],debian2=[v2:172.16.21.102:3300/0,v1:172.16.21.102:6789/0],debian3=[v2:172.16.21.103:3300/0,v1:172.16.21.103:6789/0]}
> 2021-12-15T13:34:57.075+1000 7f6e1b417700 10 mon.debian2@1(probing) e4 _reset
> 2021-12-15T13:34:57.075+1000 7f6e1b417700 10 mon.debian2@1(probing).auth v0 _set_mon_num_rank num 0 rank 0
> 2021-12-15T13:34:57.075+1000 7f6e1b417700 10 mon.debian2@1(probing) e4 cancel_probe_timeout (none scheduled)
> 2021-12-15T13:34:57.075+1000 7f6e1b417700 10 mon.debian2@1(probing) e4 timecheck_finish
> 2021-12-15T13:34:57.075+1000 7f6e1b417700 15 mon.debian2@1(probing) e4 health_tick_stop
> 2021-12-15T13:34:57.075+1000 7f6e1b417700 15 mon.debian2@1(probing) e4 health_interval_stop
> 2021-12-15T13:34:57.075+1000 7f6e1b417700 10 mon.debian2@1(probing) e4 scrub_event_cancel
> 2021-12-15T13:34:57.075+1000 7f6e1b417700 10 mon.debian2@1(probing) e4 scrub_reset
> 2021-12-15T13:34:57.075+1000 7f6e1b417700 10 mon.debian2@1(probing) e4 cancel_probe_timeout (none scheduled)
> 2021-12-15T13:34:57.075+1000 7f6e1b417700 10 mon.debian2@1(probing) e4 reset_probe_timeout 0x557603d82420 after 2 seconds
> 2021-12-15T13:34:57.075+1000 7f6e1b417700 10 mon.debian2@1(probing) e4 probing other monitors
> 2021-12-15T13:34:57.075+1000 7f6e1b417700 20 mon.debian2@1(probing) e4 _ms_dispatch existing session 0x557603d60b40 for mon.2
> 2021-12-15T13:34:57.075+1000 7f6e1b417700 20 mon.debian2@1(probing) e4 entity_name global_id 0 (none) caps allow *
> 2021-12-15T13:34:57.075+1000 7f6e1b417700 20 is_capable service=mon command= read addr v2:172.16.21.103:3300/0 on cap allow *
> 2021-12-15T13:34:57.075+1000 7f6e1b417700 20 allow so far , doing grant allow *
> 2021-12-15T13:34:57.075+1000 7f6e1b417700 20 allow all
> 2021-12-15T13:34:57.075+1000 7f6e1b417700 10 mon.debian2@1(probing) e4 handle_probe mon_probe(reply 8deaaacb-c581-4c10-b58c-0ab261aa2865 name debian3 quorum 0,2 leader 0 paxos( fc 52724559 lc 52725302 ) mon_release octopus) v7
> 2021-12-15T13:34:57.075+1000 7f6e1b417700 10 mon.debian2@1(probing) e4 handle_probe_reply mon.2 v2:172.16.21.103:3300/0 mon_probe(reply 8deaaacb-c581-4c10-b58c-0ab261aa2865 name debian3 quorum 0,2 leader 0 paxos( fc 52724559 lc 52725302 ) mon_release octopus) v7
> 2021-12-15T13:34:57.075+1000 7f6e1b417700 10 mon.debian2@1(probing) e4 monmap is e4: 3 mons at {debian1=[v2:172.16.21.101:3300/0,v1:172.16.21.101:6789/0],debian2=[v2:172.16.21.102:3300/0,v1:172.16.21.102:6789/0],debian3=[v2:172.16.21.103:3300/0,v1:172.16.21.103:6789/0]}
> 2021-12-15T13:34:57.075+1000 7f6e1b417700 10 mon.debian2@1(probing) e4 got newer/committed monmap epoch 4, mine was 4
> 2021-12-15T13:34:57.075+1000 7f6e1b417700 10 mon.debian2@1(probing) e4 bootstrap
>
> On the nautilus monitor I see
>
> 2021-12-15T13:57:03.866+1000 7f109cf23700 20 mon.debian1@0(leader) e4 _ms_dispatch existing session 0x55feee4f9b00 for mon.1
> 2021-12-15T13:57:03.866+1000 7f109cf23700 20 mon.debian1@0(leader) e4 entity_name global_id 0 (none) caps allow *
> 2021-12-15T13:57:03.866+1000 7f109cf23700 20 is_capable service=mon command= read addr v2:172.16.21.102:3300/0 on cap allow *
> 2021-12-15T13:57:03.866+1000 7f109cf23700 20 allow so far , doing grant allow *
> 2021-12-15T13:57:03.866+1000 7f109cf23700 20 allow all
> 2021-12-15T13:57:03.866+1000 7f109cf23700 10 mon.debian1@0(leader) e4 handle_probe mon_probe(probe 8deaaacb-c581-4c10-b58c-0ab261aa2865 name debian2 new mon_release unknown) v8
> 2021-12-15T13:57:03.866+1000 7f109cf23700 10 mon.debian1@0(leader) e4 handle_probe_probe mon.1 v2:172.16.21.102:3300/0 mon_probe(probe 8deaaacb-c581-4c10-b58c-0ab261aa2865 name debian2 new mon_release unknown) v8 features 4540138292840890367
> 2021-12-15T13:57:03.866+1000 7f109cf23700 20 mon.debian1@0(leader) e4 _ms_dispatch existing session 0x55feee4f9b00 for mon.1
> 2021-12-15T13:57:03.866+1000 7f109cf23700 20 mon.debian1@0(leader) e4 entity_name global_id 0 (none) caps allow *
> 2021-12-15T13:57:03.866+1000 7f109cf23700 20 is_capable service=mon command= read addr v2:172.16.21.102:3300/0 on cap allow *
> 2021-12-15T13:57:03.866+1000 7f109cf23700 20 allow so far , doing grant allow *
> 2021-12-15T13:57:03.866+1000 7f109cf23700 20 allow all
> 2021-12-15T13:57:03.866+1000 7f109cf23700 10 mon.debian1@0(leader) e4 handle_probe mon_probe(probe 8deaaacb-c581-4c10-b58c-0ab261aa2865 name debian2 new mon_release unknown) v8
> 2021-12-15T13:57:03.866+1000 7f109cf23700 10 mon.debian1@0(leader) e4 handle_probe_probe mon.1 v2:172.16.21.102:3300/0 mon_probe(probe 8deaaacb-c581-4c10-b58c-0ab261aa2865 name debian2 new mon_release unknown) v8 features 4540138292840890367
>
> The pacific version is 16.2.6 (I also tested 16.2.7 with the same
> result); the nautilus version is 15.2.15.
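For what it's worth, the pacific log reads like mon.debian2 never gets past
the probing state: it receives a probe reply, logs "got newer/committed
monmap epoch 4, mine was 4" even though the epochs are equal, and
immediately bootstraps again. If you keep digging, the local admin socket
still answers while the mon is out of quorum, so you can capture its view
of the cluster mid-loop (again assuming the mon name from your log):

  ceph daemon mon.debian2 mon_status

That prints the mon's current state, the monmap epoch it holds, and the
quorum it believes exists.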