In 'rule replicated_ruleset', change the chooseleaf step so the failure domain is osd rather than host:
...
step chooseleaf firstn 0 type osd
...
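In other words, replace "step chooseleaf firstn 0 type host" in the decompiled map with "type osd" so that the three replicas are allowed to land on different OSDs of the single host. A rough sketch of the round trip, with placeholder file names:

  ceph osd getcrushmap -o crushmap.bin
  crushtool -d crushmap.bin -o crushmap.txt
  # edit crushmap.txt: in rule replicated_ruleset, change
  #   step chooseleaf firstn 0 type host
  # to
  #   step chooseleaf firstn 0 type osd
  crushtool -c crushmap.txt -o crushmap.new
  ceph osd setcrushmap -i crushmap.new

Once the new map is injected, the PGs should peer and go active+clean.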
2016-06-22 19:38 GMT+08:00 min fang <louisfang2013@xxxxxxxxx>:
Thanks. Actually, I created a pool with more PGs and still hit this problem. Below is my crush map; please help point out how to change the crush ruleset. Thanks.
# begin crush map
tunable choose_local_tries 0
tunable choose_local_fallback_tries 0
tunable choose_total_tries 50
tunable chooseleaf_descend_once 1
tunable chooseleaf_vary_r 1
tunable straw_calc_version 1
# devices
device 0 device0
device 1 device1
device 2 osd.2
device 3 osd.3
device 4 osd.4
# types
type 0 osd
type 1 host
type 2 chassis
type 3 rack
type 4 row
type 5 pdu
type 6 pod
type 7 room
type 8 datacenter
type 9 region
type 10 root
# buckets
host redpower-ceph-01 {
    id -2    # do not change unnecessarily
    # weight 3.000
    alg straw
    hash 0    # rjenkins1
    item osd.2 weight 1.000
    item osd.3 weight 1.000
    item osd.4 weight 1.000
}
root default {
    id -1    # do not change unnecessarily
    # weight 3.000
    alg straw
    hash 0    # rjenkins1
    item redpower-ceph-01 weight 3.000
}
# rules
rule replicated_ruleset {
    ruleset 0
    type replicated
    min_size 1
    max_size 10
    step take default
    step chooseleaf firstn 0 type host
    step emit
}
# end crush map

2016-06-22 18:27 GMT+08:00 Burkhard Linke <Burkhard.Linke@xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx>:

Hi,
On 06/22/2016 12:10 PM, min fang wrote:
Hi, I created a new Ceph cluster and created a pool, but I see "stuck unclean since forever" errors (as shown below). Can you help point out the possible reasons for this? Thanks.
ceph -s
    cluster 602176c1-4937-45fc-a246-cc16f1066f65
     health HEALTH_WARN
            8 pgs degraded
            8 pgs stuck unclean
            8 pgs undersized
            too few PGs per OSD (2 < min 30)
     monmap e1: 1 mons at {ceph-01=172.0.0.11:6789/0}
            election epoch 14, quorum 0 ceph-01
     osdmap e89: 3 osds: 3 up, 3 in
            flags
      pgmap v310: 8 pgs, 1 pools, 0 bytes data, 0 objects
            60112 MB used, 5527 GB / 5586 GB avail
                   8 active+undersized+degraded
*snipsnap*
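The "too few PGs per OSD (2 < min 30)" warning in the output above is a separate issue from the stuck PGs and can be cleared by raising the placement group count of the pool. A rough sketch, assuming the pool is named rbd and that 128 PGs is a sensible target for 3 OSDs:

  ceph osd pool set rbd pg_num 128
  ceph osd pool set rbd pgp_num 128

Note that pg_num has to be increased before pgp_num, and neither value can be decreased again.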
With three OSDs on a single host you need to change the crush ruleset for the pool, since by default it tries to distribute the data across 3 different _hosts_.
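A rough sketch of one way to do that without recompiling the map, assuming Jewel-era CLI syntax (the rule name replicated_osd and the pool name rbd are only examples): create a replicated rule whose failure domain is osd, then point the pool at it.

  ceph osd crush rule create-simple replicated_osd default osd firstn
  ceph osd crush rule dump replicated_osd   # note the rule_id, assumed to be 1 below
  ceph osd pool set rbd crush_ruleset 1

With osd as the failure domain, the three replicas may sit on different OSDs of the same host, so the PGs can leave the undersized/degraded state.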
Regards,
Burkhard
_______________________________________________
ceph-users mailing list
ceph-users@xxxxxxxxxxxxxx
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com
Best regards,
施柏安 Desmond Shih
Technical Development, inwinstack Inc. (迎棧科技股份有限公司)
886-975-857-982 | desmond.s@inwinstack.com | 886-2-7738-2858 #7725
Rm. C, 5F., No. 3, Yuandong Rd., Banqiao Dist., New Taipei City 220, Taiwan (R.O.C.)