IIRC this will create a rule that tries to select n independent data centers.
Check the actual generated rule to validate this.
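For example (the rule name rrd is taken from the create command quoted below):

  ceph osd crush rule dump rrd

  # or decompile the whole map and look at it
  ceph osd getcrushmap -o crushmap.bin
  crushtool -d crushmap.bin -o crushmap.txt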
I think the only way to express "3 copies across two data centers" is to
explicitly name the two data centers in the rule, as in:
(pseudocode)
take dc1
chooseleaf 1 type host
emit
take dc2
chooseleaf 2 type host
emit
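In actual CRUSH map syntax that would look roughly like the following
(a sketch: dc1/dc2 stand for your datacenter bucket names, and the rule
name and id are placeholders):

rule three_copies_two_dc {
    id 1
    type replicated
    min_size 3
    max_size 3
    # one copy from a host in dc1
    step take dc1
    step chooseleaf firstn 1 type host
    step emit
    # two copies from different hosts in dc2
    step take dc2
    step chooseleaf firstn 2 type host
    step emit
}

Compile it back with crushtool -c, inject it with ceph osd setcrushmap -i,
and point the pool's crush_rule at it.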
This will always place one copy in dc1 and two in dc2. By contrast, a rule like
take default
choose 2 type datacenter
chooseleaf 2 type host
emit
will select a total of 4 hosts in two different data centers (2 hosts per dc).
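Either variant can be checked offline with crushtool before you apply it,
e.g. (rule id 1 is a placeholder):

  # show which OSDs rule 1 maps to for 3 replicas, and flag any
  # mapping that ends up with fewer than 3 OSDs
  crushtool -i crushmap.bin --test --rule 1 --num-rep 3 \
      --show-mappings --show-bad-mappings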
But the real problem here is that a setup with two data centers in one Ceph
cluster is just a poor fit in most scenarios. Three data centers would be
fine. Two independent clusters with async rbd-mirror or RGW synchronization
would also be fine. But one cluster spanning two data centers and replicating
via CRUSH just isn't how Ceph is meant to be used.
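If you go the two-independent-clusters route, the basic journal-based
rbd-mirror setup looks roughly like this (a sketch; 'rbd' is just an
example pool name, an rbd-mirror daemon has to run for the target
cluster, and images need the journaling feature enabled):

  # on both clusters: mirror every image in the pool
  rbd mirror pool enable rbd pool
  # on each cluster: add the other cluster as a peer
  rbd mirror pool peer add rbd client.admin@remote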
Maybe you are looking for something like "3 independent racks" and you happen
to have two racks in each dc? Really depends on your setup and requirements.
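If that's the case, the same create-replicated command would do it with rack
as the failure domain (assuming rack buckets exist in your CRUSH tree; the
rule name rrd-racks is just an example):

  ceph osd crush rule create-replicated rrd-racks default rack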
Paul
2018-08-13 14:09 GMT+02:00 Torsten Casselt <casselt@xxxxxxxxxxxxxxxxxxxx>:
Hi,
I created a rule with this command:
ceph osd crush rule create-replicated rrd default datacenter
Since the chooseleaf type is 1, I expected it to distribute the copies
evenly across the two datacenters with six hosts each. For example, six
copies would mean a copy on each host.
When I test the resulting CRUSH map with crushtool I get bad
mappings. PGs stay in active+clean+remapped and
active+undersized+remapped. I thought it might help to increase the
choose tries, but the result stays the same.
What is the best method to distribute at least three copies over two
datacenters? Since the docs state that it is rarely needed to decompile
the CRUSH map, I thought it must be possible with a rule create command
like above. I don’t think it is that rare to have two sites…
Thanks!
Torsten
--
Torsten Casselt, IT-Sicherheit, Leibniz Universität IT Services
Tel: +49-(0)511-762-799095
Fax: +49-(0)511-762-3003
Schlosswender Str. 5
D-30159 Hannover
_______________________________________________
ceph-users mailing list
ceph-users@xxxxxxxxxxxxxx
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com
--
Paul Emmerich
Looking for help with your Ceph cluster? Contact us at https://croit.io
croit GmbH
Freseniusstr. 31h
81247 München
www.croit.io
Tel: +49 89 1896585 90