On Thu, May 31, 2018 at 4:16 AM, Linh Vu <vul@xxxxxxxxxxxxxx> wrote:
> Hi all,
>
> On my test Luminous 12.2.4 cluster, with this set (initially so I could use
> upmap in the mgr balancer module):
>
> # ceph osd set-require-min-compat-client luminous
> # ceph osd dump | grep client
> require_min_compat_client luminous
> min_compat_client jewel
>
> Not quite sure why min_compat_client is still jewel.
>
> I have created cephfs on the cluster and use a mix of fuse and kernel
> clients to test it. The fuse clients are on ceph-fuse 12.2.5 and show up
> as luminous clients.
>
> The kernel client (just one mount), on either kernel 4.15.13 or 4.16.13
> (the latest, just out), shows up as jewel, as seen in `ceph features`:
>
> "client": {
>     "group": {
>         "features": "0x7010fb86aa42ada",
>         "release": "jewel",
>         "num": 1
>     },
>     "group": {
>         "features": "0x1ffddff8eea4fffb",
>         "release": "luminous",
>         "num": 8
>     }
> }
>
> I thought I read somewhere here that kernel 4.13+ should have full support
> for Luminous, so I don't know why this is showing up as jewel. I'm also
> surprised that it can mount and write to my cephfs share just fine despite
> that. It also doesn't seem to matter when I run ceph balancer in upmap
> mode while this client is connected and writing files.
>
> I can't see anything in the mount.ceph options to specify jewel vs.
> luminous either.
>
> Is this just a mislabel, i.e. is my kernel client actually fully Luminous
> capable but showing up as Jewel? Or is the kernel client still a bit
> behind?

All luminous features, including upmap, are supported in 4.13+. This is
just a reporting issue caused by the fact that MSG_ADDR2 (which predates
luminous and isn't a required feature) isn't implemented in the kernel
client yet.
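You can see the mismatch directly in the two feature masks above. This is a
sketch assuming MSG_ADDR2 is feature bit 59 (its assignment in the upstream
include/ceph_features.h; that bit number is not stated in this thread):

```shell
# Check the assumed MSG_ADDR2 feature bit (bit 59) against the two
# "features" masks reported by `ceph features` above.
MSG_ADDR2=$(( 1 << 59 ))

# Kernel client group: bit not set, so it is reported as "jewel".
printf 'kernel: %d\n' $(( 0x7010fb86aa42ada & MSG_ADDR2 ))

# ceph-fuse 12.2.5 group: bit set, so it is reported as "luminous".
printf 'fuse: %d\n' $(( 0x1ffddff8eea4fffb & MSG_ADDR2 ))
```

The kernel mask comes out 0 for that bit while the fuse mask does not, which
is why the mon labels the kernel mount "jewel" even though it supports the
features that actually matter for upmap.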
> Currently we have a mix of ceph-fuse 12.2.5 and kernel client 4.15.13 in
> our production cluster, and I'm looking to set `ceph osd
> set-require-min-compat-client luminous` so I can use ceph balancer with
> upmap mode.

You will need to append --yes-i-really-mean-it as a workaround.

Thanks,

                Ilya
_______________________________________________
ceph-users mailing list
ceph-users@xxxxxxxxxxxxxx
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com