Question regarding our CRUSHMAP

Hi,
We have an environment where we want the fault domain at the chassis level; two OSD hosts are connected to each chassis.
Based on that we have written the attached ruleset. I haven't changed the root; is this fine?
Also, if I have, say, two pools and I want to assign one pool ruleset 0 and the other ruleset 1 with this CRUSH map, is that okay?
While adding/removing hosts frequently, we are seeing stuck/incomplete PGs, along with log entries like "reducing the min_size from 2 to 1 may help". I can also see a lot of pg_temp entries in 'ceph health detail'. I suspect this may be related to a buggy CRUSH map; could that be the cause?
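For reference, this is roughly how I was planning to do the assignment (the pool names here are just examples; 'crush_ruleset' is the pool setting name on Hammer-era releases, which these tunables suggest — newer releases call it 'crush_rule'):

```shell
# Assign each pool to one of the two rulesets defined in the attached map.
# Pool names are hypothetical placeholders.
ceph osd pool set pool_a crush_ruleset 0   # replicated_ruleset: host-level fault domain
ceph osd pool set pool_b crush_ruleset 1   # chassis_ruleset: chassis-level fault domain

# Verify which ruleset each pool is using.
ceph osd pool get pool_a crush_ruleset
ceph osd pool get pool_b crush_ruleset
```

These commands require a running cluster, so I haven't included output here.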


Thanks & Regards
Somnath




# begin crush map
tunable choose_local_tries 0
tunable choose_local_fallback_tries 0
tunable choose_total_tries 50
tunable chooseleaf_descend_once 1
tunable chooseleaf_vary_r 1

# devices
device 0 osd.0
device 1 osd.1
device 2 osd.2
device 3 osd.3
device 4 osd.4
device 5 osd.5
device 6 osd.6
device 7 osd.7
device 8 osd.8
device 9 osd.9
device 10 osd.10
device 11 osd.11
device 12 osd.12
device 13 osd.13
device 14 osd.14
device 15 osd.15
device 16 osd.16
device 17 osd.17
device 18 osd.18
device 19 osd.19
device 20 osd.20
device 21 osd.21
device 22 osd.22
device 23 osd.23
device 24 osd.24
device 25 osd.25
device 26 osd.26
device 27 osd.27
device 28 osd.28
device 29 osd.29
device 30 osd.30
device 31 osd.31
device 32 osd.32
device 33 osd.33
device 34 osd.34
device 35 osd.35
device 36 osd.36
device 37 osd.37
device 38 osd.38
device 39 osd.39
device 40 osd.40
device 41 osd.41
device 42 osd.42
device 43 osd.43
device 44 osd.44
device 45 osd.45
device 46 osd.46
device 47 osd.47

# types
type 0 osd
type 1 host
type 2 chassis
type 3 rack
type 4 row
type 5 pdu
type 6 pod
type 7 room
type 8 datacenter
type 9 region
type 10 root

# buckets
host emsnode12 {
	id -5		# do not change unnecessarily
	# weight 54.720
	alg straw
	hash 0	# rjenkins1
	item osd.0 weight 6.840
	item osd.1 weight 6.840
	item osd.2 weight 6.840
	item osd.4 weight 6.840
	item osd.5 weight 6.840
	item osd.7 weight 6.840
	item osd.6 weight 6.840
	item osd.47 weight 6.840
}
host emsnode10 {
	id -10		# do not change unnecessarily
	# weight 54.720
	alg straw
	hash 0	# rjenkins1
	item osd.40 weight 6.840
	item osd.41 weight 6.840
	item osd.42 weight 6.840
	item osd.43 weight 6.840
	item osd.44 weight 6.840
	item osd.45 weight 6.840
	item osd.46 weight 6.840
	item osd.30 weight 6.840
}
chassis chassis1 {
	id -2		# do not change unnecessarily
	# weight 109.440
	alg straw
	hash 0	# rjenkins1
	item emsnode12 weight 54.720
	item emsnode10 weight 54.720
}
host emsnode3 {
	id -6		# do not change unnecessarily
	# weight 54.720
	alg straw
	hash 0	# rjenkins1
	item osd.8 weight 6.840
	item osd.9 weight 6.840
	item osd.10 weight 6.840
	item osd.11 weight 6.840
	item osd.12 weight 6.840
	item osd.13 weight 6.840
	item osd.14 weight 6.840
	item osd.15 weight 6.840
}
host emsnode4 {
	id -8		# do not change unnecessarily
	# weight 54.720
	alg straw
	hash 0	# rjenkins1
	item osd.29 weight 6.840
	item osd.31 weight 6.840
	item osd.24 weight 6.840
	item osd.26 weight 6.840
	item osd.27 weight 6.840
	item osd.28 weight 6.840
	item osd.3 weight 6.840
	item osd.25 weight 6.840
}
chassis chassis2 {
	id -3		# do not change unnecessarily
	# weight 109.440
	alg straw
	hash 0	# rjenkins1
	item emsnode3 weight 54.720
	item emsnode4 weight 54.720
}
host emsnode11 {
	id -7		# do not change unnecessarily
	# weight 54.720
	alg straw
	hash 0	# rjenkins1
	item osd.16 weight 6.840
	item osd.17 weight 6.840
	item osd.18 weight 6.840
	item osd.19 weight 6.840
	item osd.20 weight 6.840
	item osd.21 weight 6.840
	item osd.23 weight 6.840
	item osd.22 weight 6.840
}
host emsnode5 {
	id -9		# do not change unnecessarily
	# weight 54.720
	alg straw
	hash 0	# rjenkins1
	item osd.32 weight 6.840
	item osd.33 weight 6.840
	item osd.34 weight 6.840
	item osd.35 weight 6.840
	item osd.36 weight 6.840
	item osd.37 weight 6.840
	item osd.38 weight 6.840
	item osd.39 weight 6.840
}
chassis chassis3 {
	id -4		# do not change unnecessarily
	# weight 109.440
	alg straw
	hash 0	# rjenkins1
	item emsnode11 weight 54.720
	item emsnode5 weight 54.720
}
root default {
	id -1		# do not change unnecessarily
	# weight 328.320
	alg straw
	hash 0	# rjenkins1
	item chassis1 weight 109.440
	item chassis2 weight 109.440
	item chassis3 weight 109.440
}

# rules
rule replicated_ruleset {
	ruleset 0
	type replicated
	min_size 1
	max_size 10
	step take default
	step chooseleaf firstn 0 type host
	step emit
}
rule chassis_ruleset {
	ruleset 1
	type replicated
	min_size 1
	max_size 10
	step take default
	step chooseleaf firstn 0 type chassis
	step emit
}

# end crush map
