Re: Editing Crush Map to fix osd_crush_chooseleaf_type = 0


num-rep is the number of replicas you want to test the rule with, so basically the size parameter of the pool this rule applies to. Do you have any hierarchy in your osd tree? If there's no other bucket type like datacenter, rack or chassis above your hosts, then changing 'step choose firstn 0 type osd' to 'step chooseleaf firstn 0 type host' should be enough, I think: with 'choose ... type osd' CRUSH picks OSDs directly, so two replicas can end up on the same host, while 'chooseleaf ... type host' first picks distinct hosts and then one OSD under each.
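For example, assuming the tree only contains host and osd buckets under the default root, the edited rule might look like this (just a sketch against the map you pasted, not tested on your cluster):

rule replicated_rule {
	id 0
	type replicated
	min_size 1
	max_size 10
	step take default
	step chooseleaf firstn 0 type host
	step emit
}

You can dry-run the change with crushtool before injecting it. Assuming a pool size of 3 and the file names below (they are only examples):

ceph osd getcrushmap -o crushmap.bin
crushtool -d crushmap.bin -o crushmap.txt
# edit the rule in crushmap.txt, then recompile
crushtool -c crushmap.txt -o crushmap-new.bin
# --rule is the rule id (0 here), --num-rep the replica count to test
crushtool -i crushmap-new.bin --test --rule 0 --num-rep 3 --show-mappings

If every reported mapping contains OSDs from distinct hosts, you can inject the map with 'ceph osd setcrushmap -i crushmap-new.bin'.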


Quoting Matt Dunavant <mdunavant@xxxxxxxxxxxxxxxxxx>:

Thanks for the reply! I've pasted what I believe are the applicable parts of the crush map below. I see that the rule id is 0, but what is num-rep?

# begin crush map
tunable choose_local_tries 0
tunable choose_local_fallback_tries 0
tunable choose_total_tries 50
tunable chooseleaf_descend_once 1
tunable chooseleaf_vary_r 1
tunable chooseleaf_stable 1
tunable straw_calc_version 1
tunable allowed_bucket_algs 54

# rules
rule replicated_rule {
	id 0
	type replicated
	min_size 1
	max_size 10
	step take default
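	# the next step picks OSDs directly, so replicas are not separated across hosts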
	step choose firstn 0 type osd
	step emit
}

# end crush map

_______________________________________________
ceph-users mailing list -- ceph-users@xxxxxxx
To unsubscribe send an email to ceph-users-leave@xxxxxxx


