Placing Different Pools on Different OSDs

Hi,

I want to test Ceph cache tiering. The test cluster has three machines, each
with one SSD and one SATA disk. I've created a CRUSH rule, ssd_ruleset, to place
ssdpool on the SSD OSDs, but no PGs are being assigned to the SSDs.
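
(For context, once ssdpool maps correctly I intend to layer it in front of a
SATA-backed pool, roughly along these lines; satapool is only a placeholder
name here:)

ceph osd tier add satapool ssdpool
ceph osd tier cache-mode ssdpool writeback
ceph osd tier set-overlay satapool ssdpool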


root@ceph10:~# ceph osd crush rule list
[
    "replicated_ruleset",
    "ssd_ruleset"]
root@ceph10:~# ceph status
    cluster fa5427de-b0d7-466a-b7cf-90e47eac1642
     health HEALTH_OK
     monmap e2: 2 mons at
{mona=192.168.2.10:6789/0,monb=192.168.2.11:6789/0}, election epoch 4,
quorum 0,1 mona,monb
     osdmap e70: 6 osds: 6 up, 6 in
      pgmap v234: 192 pgs, 3 pools, 0 bytes data, 0 objects
            251 MB used, 5853 GB / 5853 GB avail
                 192 active+clean
root@ceph10:~# ceph osd pool create ssdpool 128 128 replicated ssd_ruleset
pool 'ssdpool' created
root@ceph10:~# ceph osd pool get ssdpool crush_ruleset
crush_ruleset: 0
root@ceph10:~# ceph osd pool set ssdpool crush_ruleset 1
set pool 8 crush_ruleset to 1
root@ceph10:~# ceph status
    cluster fa5427de-b0d7-466a-b7cf-90e47eac1642
     health HEALTH_OK
     monmap e2: 2 mons at
{mona=192.168.2.10:6789/0,monb=192.168.2.11:6789/0}, election epoch 4,
quorum 0,1 mona,monb
     osdmap e73: 6 osds: 6 up, 6 in
      pgmap v245: 320 pgs, 4 pools, 0 bytes data, 0 objects
            4857 MB used, 5849 GB / 5853 GB avail
                 320 active+clean
root@ceph10:/var/log/ceph# rbd list -p ssdpool
^C
root@ceph10:/var/log/ceph# rbd create test --pool ssdpool --size 1024
--image-format 2
^C

The command "rbd list -p ssdpool" and "rbd create test --pool ssdpool --size
1024 --image-format 2" hung.

"ceph pg dump" showed me that no pgs are on ssd osds. Why ?
Why did "ceph osd pool create ssdpool 128 128 replicated ssd_ruleset" create
ssdpool with crush_ruleset 0?
How to set ssd_ruleset when create a pool?
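
(In case it helps with diagnosing this, the placement can also be checked per
object without going through rbd; test-object below is just an arbitrary
object name:)

ceph osd map ssdpool test-object
ceph pg dump pgs_brief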


I used the ceph CLI to build the CRUSH map:

ceph osd crush add-bucket ssd root
ceph osd crush add-bucket ssd10 host
ceph osd crush add-bucket ssd11 host
ceph osd crush add-bucket ssd12 host
ceph osd crush move ssd10 root=ssd
ceph osd crush move ssd11 root=ssd
ceph osd crush move ssd12 root=ssd
ceph osd crush rule create-simple ssd_ruleset ssd root

ceph osd crush add-bucket sata10 host
ceph osd crush add-bucket sata11 host
ceph osd crush add-bucket sata12 host
ceph osd crush move sata10 root=default
ceph osd crush move sata11 root=default
ceph osd crush move sata12 root=default
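
The individual OSDs were attached to these host buckets separately (commands
not listed here); they were roughly of this form, with the OSD id, weight and
host as they appear in the map below:

ceph osd crush add osd.0 1.9 host=ssd10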

Here is my crush map:

# begin crush map
tunable choose_local_tries 0
tunable choose_local_fallback_tries 0
tunable choose_total_tries 50
tunable chooseleaf_descend_once 1

# devices
device 0 osd.0
device 1 osd.1
device 2 osd.2
device 3 osd.3
device 4 osd.4
device 5 osd.5

# types
type 0 osd
type 1 host
type 2 chassis
type 3 rack
type 4 row
type 5 pdu
type 6 pod
type 7 room
type 8 datacenter
type 9 region
type 10 root

# buckets
host sata10 {
	id -6		# do not change unnecessarily
	# weight 1.900
	alg straw
	hash 0	# rjenkins1
	item osd.3 weight 1.900
}
host sata11 {
	id -7		# do not change unnecessarily
	# weight 1.900
	alg straw
	hash 0	# rjenkins1
	item osd.4 weight 1.900
}
host sata12 {
	id -8		# do not change unnecessarily
	# weight 1.900
	alg straw
	hash 0	# rjenkins1
	item osd.5 weight 1.900
}
root default {
	id -1		# do not change unnecessarily
	# weight 5.700
	alg straw
	hash 0	# rjenkins1
	item sata10 weight 1.900
	item sata11 weight 1.900
	item sata12 weight 1.900
}
host ssd10 {
	id -2		# do not change unnecessarily
	# weight 1.900
	alg straw
	hash 0	# rjenkins1
	item osd.0 weight 1.900
}
host ssd11 {
	id -4		# do not change unnecessarily
	# weight 1.900
	alg straw
	hash 0	# rjenkins1
	item osd.1 weight 1.900
}
host ssd12 {
	id -5		# do not change unnecessarily
	# weight 1.900
	alg straw
	hash 0	# rjenkins1
	item osd.2 weight 1.900
}
root ssd {
	id -3		# do not change unnecessarily
	# weight 5.700
	alg straw
	hash 0	# rjenkins1
	item ssd10 weight 1.900
	item ssd11 weight 1.900
	item ssd12 weight 1.900
}

# rules
rule replicated_ruleset {
	ruleset 0
	type replicated
	min_size 1
	max_size 10
	step take default
	step chooseleaf firstn 0 type host
	step emit
}
rule ssd_ruleset {
	ruleset 1
	type replicated
	min_size 1
	max_size 10
	step take ssd
	step chooseleaf firstn 0 type root
	step emit
}

# end crush map
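
For comparison, a replicated rule that spreads copies across the hosts under
the ssd root would conventionally look like the sketch below; ssd_host_ruleset
and ruleset 2 are placeholder values, not part of the map above:

rule ssd_host_ruleset {
	ruleset 2
	type replicated
	min_size 1
	max_size 10
	step take ssd
	step chooseleaf firstn 0 type host
	step emit
}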

Thanks a lot!
