----- On 17 Jul 24, at 15:53, Albert Shih Albert.Shih@xxxxxxxx wrote:

> On 17/07/2024 at 09:40:59+0200, David C. wrote:
> Hi everyone.
>
>> The curiosity of Albert's cluster is that (msgr) v1 and v2 are present on the
>> mons, as well as on the OSDs' backend network.
>>
>> But v2 is absent on the public network for the OSDs and MDS.
>>
>> The specific point is that the public network has been changed.
>>
>> At first, I thought it was the order of declaration in mon_host (v1 before
>> v2), but apparently that's not it.
>>
>>
>> On Wed 17 Jul 2024 at 09:21, Frédéric Nass <frederic.nass@xxxxxxxxxxxxxxxx>
>> wrote:
>>
>> Hi David,
>>
>> Redeploying 2 out of 3 MONs a few weeks back (to have them using RocksDB, to
>> be ready for Quincy) prevented some clients from connecting to the cluster
>> and mounting CephFS volumes.
>>
>> Before the redeploy, these clients were using port 6789 (v1) explicitly, as
>> connections wouldn't work with port 3300 (v2).
>> After the redeploy, removing port 6789 from mon_ips fixed the situation.
>>
>> It seems msgr v2 activation only occurred after all 3 MONs had been
>> redeployed and were using RocksDB. Not sure why this happened, though.
>>
>> @Albert, if this cluster has been upgraded several times, you might want to
>> check /var/lib/ceph/$(ceph fsid)/kv_backend, redeploy the MONs if it says
>> leveldb, make sure all clients use the new mon_host syntax in ceph.conf
>> ([v2:<cthulhu1_ip>:3300,v1:<cthulhu1_ip>:6789], etc.) and check their
>> ability to connect to port 3300.
>
> So it's working now: I can mount the CephFS from all my clients.
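
[For the archives: the new-style mon_host syntax mentioned above would look something like this in a client's ceph.conf. A sketch only -- the 192.0.2.x addresses and the fsid placeholder are hypothetical, not values from Albert's cluster:]

```ini
# /etc/ceph/ceph.conf on a client -- hypothetical addresses shown.
# addrvec form: v2 (port 3300) listed before v1 (port 6789) for each mon.
[global]
        fsid = <cluster fsid>
        mon_host = [v2:192.0.2.1:3300,v1:192.0.2.1:6789],[v2:192.0.2.2:3300,v1:192.0.2.2:6789],[v2:192.0.2.3:3300,v1:192.0.2.3:6789]
```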
>
> Since I'm not sure what really happened and where the issue was, here is what
> was done on the cluster (in this timeline):
>
> When I changed the IP address of the server, I perhaps made a mistake and put:
>
> ceph mon set-addrs cthulhu1
> [v1:cthulhu1_new_ip:6789/0,v2:cthulhu1_new_ip:3300/0]
>
> Yesterday David changed it to the right way:
>
> ceph mon set-addrs cthulhu1
> [v2:cthulhu1_new_ip:3300/0,v1:cthulhu1_new_ip:6789/0]
>
> but it was not enough, even after restarting all the OSDs.
>
> Then some redeploys of the MDS were tried --> no joy.
>
> This morning I restarted one OSD and noticed that the restarted OSD was
> listening on v2 and v1, so I restarted all the OSDs.
>
> After that, every OSD was listening on v2 and v1.
>
> But I was still unable to mount the CephFS.
>
> I tried the option ms_mode=prefer-crc, but nothing.
>
> So I ended up rebooting the whole cluster, and now everything works fine.
>
> Thanks for your help.

Great! Glad you figured it out.

Frédéric.

> Regards
> --
> Albert SHIH 🦫 🐸
> Observatoire de Paris
> France
> Heure locale/Local time:
> mer. 17 juil. 2024 15:42:00 CEST

_______________________________________________
ceph-users mailing list -- ceph-users@xxxxxxx
To unsubscribe send an email to ceph-users-leave@xxxxxxx
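
[Archive note: the difference between the two `ceph mon set-addrs` invocations in the thread is only the declaration order of v1 and v2 inside the bracketed addrvec. David suspected this order at first (it turned out not to be the cause here), but it is easy to check which protocol a given addrvec string declares first. A minimal illustrative sketch -- this is not Ceph code, and the helper names are made up:]

```python
# Illustrative only: parse the bracketed addrvec syntax quoted in the
# thread and report whether v2 is declared before v1.

def addr_protocols(addrvec: str) -> list:
    """Return the protocol tags (v1/v2) in declaration order.

    addrvec is a string like '[v2:10.0.0.1:3300/0,v1:10.0.0.1:6789/0]'.
    """
    entries = addrvec.strip("[]").split(",")
    return [entry.split(":", 1)[0] for entry in entries]

def v2_first(addrvec: str) -> bool:
    """True when v2 is declared before v1 in the addrvec."""
    protocols = addr_protocols(addrvec)
    return protocols.index("v2") < protocols.index("v1")

# Albert's first attempt (v1 first) vs. David's correction (v2 first):
first_attempt = "[v1:cthulhu1_new_ip:6789/0,v2:cthulhu1_new_ip:3300/0]"
corrected = "[v2:cthulhu1_new_ip:3300/0,v1:cthulhu1_new_ip:6789/0]"

print(v2_first(first_attempt))  # False
print(v2_first(corrected))      # True
```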