Kubernetes Luminous client acting on Nautilus pool: protocol feature mismatch: missing 200000 (CEPH_FEATURE_MON_GV ?)

Hello all!

I am configuring a new storage class on my Kubernetes cluster, pointing to a pool on a Ceph cluster which was recently upgraded from Luminous to Nautilus. The old storage class points to a pool on a separate cluster still running Luminous and works fine. On the new one I think I did the configuration properly, yet when creating a volume I get this:

2020-10-07 10:00:53.849128 7f2f8c6f9700 0 -- 10.2.3.13:0/3520982056 >> 10.2.3.23:6789/0 pipe(0x7f2f780008c0 sd=3 :60192 s=1 pgs=0 cs=0 l=1 c=0x7f2f780068e0).connect protocol feature mismatch, my 27ffffffefdfbfff < peer 27fddff8efacbfff missing 200000
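
For what it is worth, the "missing" value can be reproduced directly from the two bitmasks in that line; a quick sanity check in plain bash (the hex values are just copied from the log above):

    # Feature bits advertised by my client ("my") and by the monitor ("peer"):
    my=0x27ffffffefdfbfff
    peer=0x27fddff8efacbfff
    # Bits present on the peer side but absent on my client side:
    printf 'missing: 0x%x\n' $(( peer & ~my ))   # -> 0x200000, i.e. bit 21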

Looking at https://ceph.io/geen-categorie/feature-set-mismatch-error-on-ceph-kernel-client/ it looks like 200000 corresponds to CEPH_FEATURE_MON_GV, which, by the way, is listed in
	https://github.com/ceph/ceph/pull/8214
as a feature that could/should be removed.

Things being as described above, I guess it would be safe to change the value of that tunable, correct? Unfortunately, I was unable to find any way to achieve this: the obvious "ceph osd crush set-tunable mon_gv 0" does not work.
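
For completeness, these are the read-only commands I know of for inspecting what the Nautilus side actually advertises (they only report, nothing gets changed):

    ceph osd crush show-tunables   # current CRUSH tunables profile of the cluster
    ceph mon feature ls            # features the monitors require / advertise
    ceph features                  # feature bits of all connected clients and daemons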

Any idea, please, how I can fix this error?
Would upgrading the Ceph packages on the Kubernetes workers (currently at Luminous) help, maybe?
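
If version skew is indeed the issue, I suppose the two sides can be compared with something like:

    ceph --version    # on a Kubernetes worker: installed Ceph client packages
    ceph versions     # against the cluster: versions of all running mons/mgrs/osds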

  Thanks!

				Fulvio

--
Fulvio Galeazzi
GARR-CSD Department
skype: fgaleazzi70
tel.: +39-334-6533-250

