Re: PGs stuck unclean "active+remapped" after an osd marked out

On 11/03/2015 05:44, Francois Lafont wrote:

> PS: here is my conf.
> [...]

I have this too:

~# ceph osd crush show-tunables
{ "choose_local_tries": 0,
  "choose_local_fallback_tries": 0,
  "choose_total_tries": 50,
  "chooseleaf_descend_once": 1,
  "chooseleaf_vary_r": 0,
  "straw_calc_version": 1,
  "profile": "unknown",
  "optimal_tunables": 0,
  "legacy_tunables": 0,
  "require_feature_tunables": 1,
  "require_feature_tunables2": 1,
  "require_feature_tunables3": 0,
  "has_v2_rules": 0,
  "has_v3_rules": 0}

And the online documentation says this:
http://ceph.com/docs/master/rados/operations/crush-map/#crush-tunables3

    "Legacy default is 0, but with this value CRUSH is sometimes unable to
    find a mapping."

Could this be the cause of my stuck PGs?
Should I run this in my cluster?

    ceph osd crush set-tunable chooseleaf_vary_r 1
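
If so, I suppose I should test the change offline with crushtool before
injecting it. A rough sketch of what I have in mind (the /tmp paths are
just placeholders):

    ~# ceph osd getcrushmap -o /tmp/crushmap
    ~# crushtool -i /tmp/crushmap --set-chooseleaf-vary-r 1 -o /tmp/crushmap.new
    ~# crushtool -i /tmp/crushmap.new --test --show-bad-mappings
    ~# ceph osd setcrushmap -i /tmp/crushmap.new

(If I understand crushtool correctly, --test --show-bad-mappings prints
nothing when CRUSH can find a mapping for every input, so I would only
inject the new map in that case.)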

But here http://ceph.com/docs/master/rados/operations/crush-map/#which-client-versions-support-crush-tunables3,
I can read: "Linux kernel version v3.15 or later (for the file system
and RBD kernel clients)", and that could be a problem for me because my
clients run kernel 3.13 (Ubuntu 14.04).
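
In the meantime I can at least check the clients. As far as I know,
libceph logs a "feature set mismatch" error when the kernel is missing
a feature the cluster requires, so a quick check on each client could
be (assuming the stock Ubuntu kernel):

    ~# uname -r                                # must be >= 3.15 for tunables3
    ~# dmesg | grep -i 'feature set mismatch'  # would appear after the change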

-- 
François Lafont
_______________________________________________
ceph-users mailing list
ceph-users@xxxxxxxxxxxxxx
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com




