On Tue, Mar 2, 2021 at 9:26 AM Stefan Kooman <stefan@xxxxxx> wrote:
>
> Hi,
>
> We have a CentOS 7 VM with a mainline kernel (5.11.2-1.el7.elrepo.x86_64
> #1 SMP Fri Feb 26 11:54:18 EST 2021 x86_64 x86_64 x86_64 GNU/Linux) and
> Ceph Octopus 15.2.9 packages installed. The MDS server is running
> Nautilus 14.2.16. Messenger v2 has been enabled. Port 3300 of the
> monitors is reachable from the client. At mount time we get the
> following:
>
> > Mar 2 09:01:14 kernel: Key type ceph registered
> > Mar 2 09:01:14 kernel: libceph: loaded (mon/osd proto 15/24)
> > Mar 2 09:01:14 kernel: FS-Cache: Netfs 'ceph' registered for caching
> > Mar 2 09:01:14 kernel: ceph: loaded (mds proto 32)
> > Mar 2 09:01:14 kernel: libceph: mon4 (1)[mond addr]:6789 session established
> > Mar 2 09:01:14 kernel: libceph: another match of type 1 in addrvec
> > Mar 2 09:01:14 kernel: ceph: corrupt mdsmap
> > Mar 2 09:01:14 kernel: ceph: error decoding mdsmap -22
> > Mar 2 09:01:14 kernel: libceph: another match of type 1 in addrvec
> > Mar 2 09:01:14 kernel: libceph: corrupt full osdmap (-22) epoch 98764 off 6357 (0000000027a57a75 of 00000000d3075952-00000000e307797f)
> > Mar 2 09:02:15 kernel: ceph: No mds server is up or the cluster is laggy
>
> The /etc/ceph/ceph.conf has been adjusted to reflect the messenger v2
> changes: ms_bind_ipv6=true, ms_bind_ipv4=false. The kernel client still
> seems to be using the v1 port, though (even though v2 should be
> supported since 5.11).
>
> Has anyone seen this before? Just guessing here, but could it be that
> the client tries to speak the v2 protocol on the v1 port?

Hi Stefan,

Those "another match of type 1" errors suggest that you have two
different v1 addresses for some or all OSDs and MDSes in the osdmap
and mdsmap respectively. What is the output of "ceph osd dump" and
"ceph fs dump"?

Thanks,

                Ilya
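
For context on what the decoder is objecting to: an addrvec is the
per-daemon list of addresses carried in the osdmap/mdsmap, with at most
one entry per address type (type 2 = msgr2, type 1 = legacy v1). In
"ceph osd dump" output, a healthy entry looks roughly like this (the
addresses here are made-up placeholders):

  osd.0 up in weight 1 ... [v2:192.168.0.10:6800/1234,v1:192.168.0.10:6801/1234] ...

If the decoder encounters a second entry of a type it has already
matched, e.g. two v1 addresses in one addrvec,

  [v1:192.168.0.10:6801/1234,v1:10.0.0.10:6801/1234]

the kernel client logs "another match of type 1 in addrvec" and rejects
the map with -22 (EINVAL), which lines up with the "corrupt mdsmap" and
"corrupt full osdmap (-22)" messages above.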
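
On the v1-port question: ms_bind_ipv4/ms_bind_ipv6 in ceph.conf control
which address families the Ceph daemons bind to; the kernel client does
not pick up msgr2 from there. Since 5.11 the kernel client speaks msgr2
only when asked to via the ms_mode= mount option and otherwise defaults
to ms_mode=legacy, i.e. v1 on port 6789. A minimal sketch, with
placeholder monitor addresses and secret:

  # connect over msgr2 (port 3300), preferring crc integrity mode
  mount -t ceph [2001:db8::1]:3300,[2001:db8::2]:3300:/ /mnt/cephfs \
      -o name=admin,secret=<key>,ms_mode=prefer-crc

ms_mode also accepts crc, secure and prefer-secure, for when the
connection must (or may) be fully encrypted rather than just
crc-checked.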