On 25/11/2015 14:37, Emmanuel Lacour wrote:
On 24/11/2015 21:48, Gregory Farnum wrote:
Yeah, this is the old "two copies in one rack, a third copy elsewhere"
replication scheme that lots of stuff likes but CRUSH doesn't really
support. Assuming new enough clients and servers (some of the older
ones barf when you do this), you can do
rule replicate_three_times {
        ruleset 1
        type replicated
        min_size 3
        max_size 3
        step take default
        step choose firstn 2 type rack
        step chooseleaf firstn 2 type host
        step emit
}
That will pick 2 racks and emit 2 OSDs (on separate hosts) in each,
but the list will get truncated down to three OSDs.
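For completeness, the usual round trip to get such a rule into the cluster is to pull the CRUSH map, decompile it, add the rule, recompile and inject it back. Roughly (file names below are just placeholders):

$ ceph osd getcrushmap -o crushmap.bin         # grab the current compiled map
$ crushtool -d crushmap.bin -o crushmap.txt    # decompile to editable text
# ... paste the rule above into crushmap.txt ...
$ crushtool -c crushmap.txt -o crushmap.new    # recompile
$ ceph osd setcrushmap -i crushmap.new         # inject the edited map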
Thanks, it's exactly what I need and it works!
$ ceph osd tree
ID WEIGHT   TYPE NAME       UP/DOWN REWEIGHT PRIMARY-AFFINITY
-1 43.67999 root default
-7 21.84000     rack N3
-2 10.92000         host ceph1
 0  5.45999             osd.0      up  1.00000          0.79999
 7  5.45999             osd.7      up  1.00000          0.79999
-4 10.92000         host ceph3
 3  5.45999             osd.3      up  1.00000          0.79999
 4  5.45999             osd.4      up  1.00000          0.79999
-6 21.84000     rack N6
-3 10.92000         host ceph2
 1  5.45999             osd.1      up  1.00000          1.00000
 2  5.45999             osd.2      up  1.00000          1.00000
-5 10.92000         host ceph4
 5  5.45999             osd.5      up  1.00000          1.00000
 6  5.45999             osd.6      up  1.00000          1.00000
-8        0     rack N-1
$ ceph osd dump | grep libvirt-pool
pool 7 'libvirt-pool' replicated size 3 min_size 1 crush_ruleset 1 object_hash rjenkins pg_num 256 pgp_num 256 last_change 49548 flags hashpspool stripe_width 0
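For anyone reproducing this, the pool was presumably pointed at the new rule with something along the lines of (pre-Luminous syntax, matching the crush_ruleset field above):

$ ceph osd pool set libvirt-pool crush_ruleset 1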
$ rados put -p libvirt-pool initrd.img-3.16.0-4-amd64 /boot/initrd.img-3.16.0-4-amd64
$ ceph osd map libvirt-pool initrd.img-3.16.0-4-amd64
osdmap e49549 pool 'libvirt-pool' (7) object 'initrd.img-3.16.0-4-amd64' -> pg 7.59805ed7 (7.d7) -> up ([5,1,3], p5) acting ([5,1,3], p5)
:)
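For what it's worth, a quick way to double-check that an acting set like [5,1,3] really spans both racks (besides reading the tree above) is something like:

$ ceph pg map 7.d7                               # up/acting set of that placement group
$ for osd in 5 1 3; do ceph osd find $osd; done  # CRUSH location (host/rack) of each OSD

Here osd.5 and osd.1 sit in rack N6 (hosts ceph4 and ceph2) and osd.3 in rack N3 (host ceph3), so there really are two copies in one rack and the third in the other.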
Though I do not understand what you mean by "but the list will get truncated down to three OSDs" - can you explain this?
Take the default root -> root
Take two racks -> 2 racks
For each rack, pick two hosts. -> 4 hosts
Now, pick a leaf in each host: that would be 4 OSDs, but the list gets cut down to the 3 replicas the pool asks for (size 3, which is also why the rule has min_size/max_size 3) -> 3 OSDs
Voilà !
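If you want to watch the truncation happen, crushtool can replay the rule offline against the compiled map (a rough sketch, the file name is a placeholder):

$ ceph osd getcrushmap -o crushmap.bin
# the rule internally selects 2 racks x 2 hosts = 4 candidate OSDs,
# but each mapping below is cut down to the 3 replicas requested:
$ crushtool -i crushmap.bin --test --rule 1 --num-rep 3 --min-x 0 --max-x 9 --show-mappings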
Loris
_______________________________________________
ceph-users mailing list
ceph-users@xxxxxxxxxxxxxx
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com