Dear fellow Ceph users,
I run a Ceph cluster providing CephFS to a medium-sized Linux server
farm. Originally we used the kernel driver (which on the distro we use
presents itself to the cluster as a Luminous client) to mount the file
system. However, at the time of the upgrade to Squid we became aware of
the data-corruption bug associated with the use of root squash, and we
therefore switched to (the Squid version of) ceph-fuse. I also took
advantage of the switch to bump the OSD require-min-compat-client to
Reef and then change the balancer mode to upmap-read.
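For reference, this is roughly what I ran at the time, as far as I can
reconstruct it (the file system name "cephfs" below is a placeholder
for ours):

  ceph fs required_client_features cephfs add client_mds_auth_caps
  ceph osd set-require-min-compat-client reef
  ceph balancer mode upmap-read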
Weeks passed, and it became apparent that despite all the attempted
tuning, certain workloads which performed fine with the kernel driver
perform _extremely_ poorly with ceph-fuse. We have therefore decided to
relocate the servers which absolutely must have root squash to a
different network-storage solution, switch root squash off for all
CephFS clients and revert to the kernel driver.
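For what it's worth, the caps change I have in mind for dropping root
squash is along these lines (client and FS names are placeholders, and
the mon/osd caps are simply restated unchanged):

  ceph auth caps client.servers \
      mds 'allow rw fsname=cephfs' \
      mon 'allow r fsname=cephfs' \
      osd 'allow rw tag cephfs data=cephfs'

i.e. the same caps the client already has, minus the root_squash flag
on the mds cap.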
Unfortunately, while removing client_mds_auth_caps from the CephFS
required_client_features and switching the balancer mode back to upmap
went without any problems, "ceph osd set-require-min-compat-client
luminous" fails with
Error EPERM: osdmap current utilizes features that require reef; cannot
set require_min_compat_client below that to luminous
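For completeness, the exact sequence was roughly the following (again,
"cephfs" stands in for our FS name); only the last command fails:

  ceph fs required_client_features cephfs rm client_mds_auth_caps
  ceph balancer mode upmap
  ceph osd set-require-min-compat-client luminous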
Is there a way of making the osdmap Luminous-compatible again without
losing any data stored on the cluster?
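My (untested) suspicion is that the Reef-only features are the
pg_upmap_primary entries left behind by the upmap-read balancer, and
that something along these lines might clear them, but I would
appreciate confirmation before touching the osdmap:

  # list any pg_upmap_primary entries still present in the osdmap
  ceph osd dump | grep pg_upmap_primary
  # remove them one PG at a time (<pgid> taken from the output above)
  ceph osd rm-pg-upmap-primary <pgid>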
Thank you in advance for your help!
--
MS