Re: 6 osds on 2 hosts, does Ceph always write data in one osd on host1 and replica in osd on host2?

2017-06-26 11:15 GMT+02:00 Ashley Merrick <ashley@xxxxxxxxxxxxxx>:
> Will need to see a full export of your crush map rules.

Here is the full export of my crush map:

# begin crush map
tunable choose_local_tries 0
tunable choose_local_fallback_tries 0
tunable choose_total_tries 50
tunable chooseleaf_descend_once 1
tunable chooseleaf_vary_r 1
tunable straw_calc_version 1

# devices
device 0 osd.0
device 1 osd.1
device 2 osd.2
device 3 osd.3
device 4 osd.4
device 5 osd.5

# types
type 0 osd
type 1 host
type 2 chassis
type 3 rack
type 4 row
type 5 pdu
type 6 pod
type 7 room
type 8 datacenter
type 9 region
type 10 root

# buckets
host ceph-storage-rbx-1 {
	id -2		# do not change unnecessarily
	# weight 10.852
	alg straw
	hash 0	# rjenkins1
	item osd.0 weight 3.617
	item osd.2 weight 3.617
	item osd.4 weight 3.617
}
host ceph-storage-rbx-2 {
	id -3		# do not change unnecessarily
	# weight 10.852
	alg straw
	hash 0	# rjenkins1
	item osd.1 weight 3.617
	item osd.3 weight 3.617
	item osd.5 weight 3.617
}
root default {
	id -1		# do not change unnecessarily
	# weight 21.704
	alg straw
	hash 0	# rjenkins1
	item ceph-storage-rbx-1 weight 10.852
	item ceph-storage-rbx-2 weight 10.852
}

# rules
rule replicated_ruleset {
	ruleset 0
	type replicated
	min_size 1
	max_size 10
	step take default
	step chooseleaf firstn 0 type host
	step emit
}

# end crush map
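
If I read that rule right, "step chooseleaf firstn 0 type host" makes CRUSH pick each replica from a different host bucket, so with 2 replicas one copy should land on an OSD of ceph-storage-rbx-1 and the other on an OSD of ceph-storage-rbx-2. As a rough sketch (crush.bin is just a placeholder file name), the placement can be checked offline by exporting the compiled map and simulating rule 0 with crushtool:

# export the compiled crush map, then simulate rule 0 with 2 replicas for a few sample inputs
ceph osd getcrushmap -o crush.bin
crushtool -i crush.bin --test --rule 0 --num-rep 2 --show-mappings --min-x 0 --max-x 9

Each output line shows the OSD pair chosen for one input value; with this map every pair should combine one of osd.0/2/4 with one of osd.1/3/5.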
_______________________________________________
ceph-users mailing list
ceph-users@xxxxxxxxxxxxxx
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com
