I originally built a single-node cluster and added 'osd_crush_chooseleaf_type = 0  # 0 is for a one-node cluster' to ceph.conf (that line is now commented out).
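For reference, this is roughly how that line sat in ceph.conf -- the [global] placement is from memory:

[global]
    # osd_crush_chooseleaf_type = 0    # 0 = spread replicas across OSDs (single-node); the default, 1, spreads them across hosts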
I've now added a 2nd node; where can I set this value to 1? I can see in the CRUSH map that the OSDs are under 'host' buckets, but I don't see any reference to 'leaf'.
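In case it matters, this is how I have been exporting the map to look at it, and how I assume (from the docs) I would put an edited version back in -- I haven't actually injected anything yet, and the file names are just mine:

ceph osd getcrushmap -o crushmap.bin           # export the compiled CRUSH map
crushtool -d crushmap.bin -o crushmap.txt      # decompile to the text pasted below
# ... edit crushmap.txt ...
crushtool -c crushmap.txt -o crushmap.new      # recompile
ceph osd setcrushmap -i crushmap.new           # inject the modified map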
Should the cluster have rebalanced automatically when the 2nd host was added? How can I verify that it did?
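(I assume I can watch for data movement with something like the following, though I'm not sure what a fully rebalanced state should look like:)

ceph -s           # overall health plus any recovery/backfill activity
ceph -w           # watch cluster events as they happen
ceph osd tree     # confirm both hosts and all four OSDs are up and in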
The issue right now is that with two hosts, copies = 2 (size = 2) and min copies = 1 (min_size = 1), I cannot access data from client machines when one of the two hosts goes down.
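Those values come from the pool settings; I checked them like this ('rbd' is just the pool I happen to be using):

ceph osd pool get rbd size        # expect: size: 2
ceph osd pool get rbd min_size    # expect: min_size: 1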
# begin crush map
tunable choose_local_tries 0
tunable choose_local_fallback_tries 0
tunable choose_total_tries 50
tunable chooseleaf_descend_once 1
# devices
device 0 osd.0
device 1 osd.1
device 2 osd.2
device 3 osd.3
# types
type 0 osd
type 1 host
type 2 chassis
type 3 rack
type 4 row
type 5 pdu
type 6 pod
type 7 room
type 8 datacenter
type 9 region
type 10 root
# buckets
host ceph1 {
    id -2        # do not change unnecessarily
    # weight 163.350
    alg straw
    hash 0       # rjenkins1
    item osd.0 weight 3.630
    item osd.1 weight 3.630
}
host ceph2 {
    id -3        # do not change unnecessarily
    # weight 163.350
    alg straw
    hash 0       # rjenkins1
    item osd.2 weight 3.630
    item osd.3 weight 3.630
}
root default {
    id -1        # do not change unnecessarily
    # weight 326.699
    alg straw
    hash 0       # rjenkins1
    item ceph1 weight 163.350
    item ceph2 weight 163.350
}
# rules
rule replicated_ruleset {
    ruleset 0
    type replicated
    min_size 1
    max_size 10
    step take default
    step choose firstn 0 type osd    <-- should this line be "step chooseleaf firstn 0 type host"?
    step emit
}
# end crush map
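For clarity, this is what I think the rule should look like after the change (untested on my end):

rule replicated_ruleset {
    ruleset 0
    type replicated
    min_size 1
    max_size 10
    step take default
    step chooseleaf firstn 0 type host    # pick each replica from a different host bucket
    step emit
}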