What do you have for the new global_id settings? Maybe set it to allow
insecure global_id auth and see if that allows the mon to join?

> I can try to move the /var/lib/ceph/mon/ dir and recreate it!?

I'm not sure that will help. Running the mon with --debug_ms=1 might
give clues about why it's stuck probing.

.. Dan

On Sun, 25 Jul 2021, 17:53 Ansgar Jazdzewski, <a.jazdzewski@xxxxxxxxxxxxxx> wrote:
> On Sun, 25 Jul 2021 at 17:17, Dan van der Ster <dan@xxxxxxxxxxxxxx> wrote:
> >
> > > raise the min version to nautilus
> >
> > Are you referring to the min osd version or the min client version?
>
> Yes, sorry, that was not written clearly.
>
> > I don't think the latter will help.
> >
> > Are you sure that mon.osd01 can reach those other mons on ports 6789 and 3300?
>
> Yes, I just tested it one more time: ping (checking the MTU) and
> telnet to all mon ports.
>
> > Do you have any notable custom ceph configurations on this cluster?
>
> No, I don't think there is anything fancy:
>
> [global]
> cluster network = 10.152.40.0/22
> fsid = a6baa789-6be2-4ce0-ab2d-7c78b899d4bd
> mon host = 10.152.28.171,10.152.28.172,10.152.28.173
> mon initial members = osd01,osd02,osd03
> osd pool default crush rule = -1
> public network = 10.152.28.0/22
>
> I just tried to start the mon with --force-sync, but since the mon
> does not join, it will not pull any data:
>
> ceph-mon -f --cluster ceph --id osd01 --setuser ceph --setgroup ceph
> --debug_mon 10 --yes-i-really-mean-it --force-sync -d
>
> I can try to move the /var/lib/ceph/mon/ dir and recreate it!?
>
> Thanks for all the help so far!
> Ansgar
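
A minimal sketch of the global_id suggestion Dan makes at the top,
assuming a release that has the insecure-reclaim mitigation
(14.2.20 / 15.2.11 / 16.2.1 or later) and a working admin keyring
against the remaining quorum:

    # check the current setting on the quorum mons
    ceph config get mon auth_allow_insecure_global_id_reclaim
    # temporarily allow insecure global_id reclaim so the stuck mon can join
    ceph config set mon auth_allow_insecure_global_id_reclaim true
    # revert once mon.osd01 is back in quorum
    ceph config set mon auth_allow_insecure_global_id_reclaim false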
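
To capture the probing traffic Dan mentions, the same foreground
invocation Ansgar used can be extended with --debug_ms 1 (a sketch;
drop --force-sync for a plain debug run):

    ceph-mon -f --cluster ceph --id osd01 --setuser ceph --setgroup ceph \
        --debug_mon 10 --debug_ms 1 -d 2>&1 | tee /tmp/mon.osd01.debug.log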
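
A quick version of the reachability test Ansgar describes, assuming a
9000-byte MTU on the public network (use -s 1472 for a standard 1500
MTU); the addresses are the mon hosts from the config above:

    for h in 10.152.28.171 10.152.28.172 10.152.28.173; do
        ping -c 1 -M do -s 8972 "$h"   # don't-fragment ping at full MTU
        nc -zv "$h" 3300               # msgr2
        nc -zv "$h" 6789               # msgr1
    done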
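
If it does come to recreating /var/lib/ceph/mon/, a hedged sketch of
the usual rebuild, assuming a non-containerized deployment with the
systemd unit ceph-mon@osd01, the default /var/lib/ceph layout, and a
quorum that still answers admin commands:

    systemctl stop ceph-mon@osd01
    mv /var/lib/ceph/mon/ceph-osd01 /var/lib/ceph/mon/ceph-osd01.bak  # keep the old store
    ceph mon getmap -o /tmp/monmap            # current monmap from the quorum
    ceph auth get mon. -o /tmp/mon.keyring    # mon. keyring
    ceph-mon --mkfs -i osd01 --monmap /tmp/monmap --keyring /tmp/mon.keyring
    chown -R ceph:ceph /var/lib/ceph/mon/ceph-osd01
    systemctl start ceph-mon@osd01            # fresh store syncs from the quorum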