Try working with the CRUSH tunables; this is what I have:
$ ceph osd crush show-tunables
{
    "choose_local_tries": 0,
    "choose_local_fallback_tries": 0,
    "choose_total_tries": 50,
    "chooseleaf_descend_once": 1,
    "chooseleaf_vary_r": 1,
    "chooseleaf_stable": 0,
    "straw_calc_version": 1,
    "allowed_bucket_algs": 54,
    "profile": "hammer",
    "optimal_tunables": 0,
    "legacy_tunables": 0,
    "minimum_required_version": "firefly",
    "require_feature_tunables": 1,
    "require_feature_tunables2": 1,
    "has_v2_rules": 0,
    "require_feature_tunables3": 1,
    "has_v3_rules": 0,
    "has_v4_buckets": 0,
    "require_feature_tunables5": 0,
    "has_v5_rules": 0
}
Try to 'disable' the 'require_feature_tunables5' tunable; with that I think you should be OK. The missing feature bit 400000000000000 in your dmesg should correspond to CRUSH_TUNABLES5 (the chooseleaf_stable tunable), which, as far as I know, the 4.4 kernel client does not support. Maybe there's another way, but this works for me: comment out the "tunable chooseleaf_stable 1" line in the decompiled crushmap and inject the crushmap back into the cluster (of course, that will cause a lot of data movement across the PGs).
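A rough sketch of that crushmap round-trip, using the standard getcrushmap/crushtool/setcrushmap workflow (the file names here are just examples):

$ ceph osd getcrushmap -o crushmap.bin
$ crushtool -d crushmap.bin -o crushmap.txt
# edit crushmap.txt and comment out (or delete) the line: tunable chooseleaf_stable 1
$ crushtool -c crushmap.txt -o crushmap-new.bin
$ ceph osd setcrushmap -i crushmap-new.bin

If I remember correctly, switching to a pre-jewel tunables profile (e.g. 'ceph osd crush tunables hammer') should have the same effect, since chooseleaf_stable is only turned on by the jewel profile, but I haven't verified that on Luminous.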
German
2017-09-27 9:08 GMT-03:00 Yoann Moulin <yoann.moulin@xxxxxxx>:
Hello,
I am trying to mount a CephFS filesystem from a fresh Luminous cluster.
With the latest kernel (4.13.3), it works:
> $ sudo mount.ceph iccluster041.iccluster,iccluster042.iccluster,iccluster054.iccluster:/ /mnt -v -o name=container001,secretfile=/tmp/secret
> parsing options: name=container001,secretfile=/tmp/secret
> $ df -h /mnt
> Filesystem Size Used Avail Use% Mounted on
> 10.90.38.17,10.90.38.18,10.90.39.5:/ 66T 19G 66T 1% /mnt
> root@iccluster054:~# ceph auth get client.container001
> exported keyring for client.container001
> [client.container001]
> key = <snip>
> caps mds = "allow rw"
> caps mon = "allow r"
> caps osd = "allow rw pool=cephfs_data"
> root@iccluster05:/var/log# ceph --cluster container fs authorize cephfs client.container001 / rw
> [client.container001]
> key = <snip>
With the latest Ubuntu 16.04 LTS kernel (4.4.0-96) and ceph-common 12.2.0, I'm not able to mount it:
> Linux iccluster013 4.4.0-96-generic #119~14.04.1-Ubuntu SMP Wed Sep 13 08:40:48 UTC 2017 x86_64 x86_64 x86_64 GNU/Linux
> ii ceph-common 12.2.0-1trusty amd64 common utilities to mount and interact with a ceph storage cluster
> root@iccluster013:~# mount.ceph iccluster041,iccluster042,iccluster054:/ /mnt -v -o name=container001,secretfile=/tmp/secret
> parsing options: name=container001,secretfile=/tmp/secret
> mount error 110 = Connection timed out
Here is the dmesg output:
> [ 417.528621] Key type ceph registered
> [ 417.528996] libceph: loaded (mon/osd proto 15/24)
> [ 417.540534] FS-Cache: Netfs 'ceph' registered for caching
> [ 417.540546] ceph: loaded (mds proto 32)
> [...]
> [ 2596.609885] libceph: mon1 10.90.38.18:6789 feature set mismatch, my 107b84a842aca < server's 40107b84a842aca, missing 400000000000000
> [ 2596.626797] libceph: mon1 10.90.38.18:6789 missing required protocol features
> [ 2606.960704] libceph: mon0 10.90.38.17:6789 feature set mismatch, my 107b84a842aca < server's 40107b84a842aca, missing 400000000000000
> [ 2606.977621] libceph: mon0 10.90.38.17:6789 missing required protocol features
> [ 2616.944998] libceph: mon0 10.90.38.17:6789 feature set mismatch, my 107b84a842aca < server's 40107b84a842aca, missing 400000000000000
> [ 2616.961917] libceph: mon0 10.90.38.17:6789 missing required protocol features
> [ 2626.961329] libceph: mon0 10.90.38.17:6789 feature set mismatch, my 107b84a842aca < server's 40107b84a842aca, missing 400000000000000
> [ 2626.978290] libceph: mon0 10.90.38.17:6789 missing required protocol features
> [ 2636.945765] libceph: mon0 10.90.38.17:6789 feature set mismatch, my 107b84a842aca < server's 40107b84a842aca, missing 400000000000000
> [ 2636.962677] libceph: mon0 10.90.38.17:6789 missing required protocol features
> [ 2646.962255] libceph: mon1 10.90.38.18:6789 feature set mismatch, my 107b84a842aca < server's 40107b84a842aca, missing 400000000000000
> [ 2646.979228] libceph: mon1 10.90.38.18:6789 missing required protocol features
Is there a specific option to set on the CephFS cluster so that it can be mounted with a 4.4 kernel?
Best regards,
--
Yoann Moulin
EPFL IC-IT
_______________________________________________
ceph-users mailing list
ceph-users@xxxxxxxxxxxxxx
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com