Hi,
I just deployed a test cluster to try that out, too. I only deployed
three MONs, but the same approach should apply to your setup.
> I tried to create the third datacenter and put the tiebreaker there but
> got the following error:
> ----------------------------------------------------
> root@ceph-node-01:/home/clouduser# ceph mon enable_stretch_mode ceph-node-05 stretch_rule datacenter
> Error EINVAL: there are 3datacenter's in the cluster but stretch mode currently only works with 2!
You don't create a third datacenter within the osd tree; you just tell
ceph that your tie-breaker MON is in a different (virtual) DC. For me it
worked: I have two DCs and put the third MON (the tie-breaker) into a
virtual dc3:
pacific1:~ # ceph mon set_location pacific3 datacenter=dc3
pacific1:~ # ceph mon enable_stretch_mode pacific3 stretch_rule datacenter
This automatically set the pool size to 4 and distributed the PGs
evenly across both DCs.
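By the way, the docs also want the connectivity election strategy set
before enabling stretch mode, and afterwards you can check the result.
A quick sketch ("rbd" is just a placeholder for whichever replicated
pool you look at):

pacific1:~ # ceph mon set election_strategy connectivity
pacific1:~ # ceph mon dump
pacific1:~ # ceph osd pool get rbd size
pacific1:~ # ceph osd pool get rbd min_size

ceph mon dump should show the crush locations of the MONs (and the
tiebreaker once stretch mode is active), and the pools should report
size 4 (and, if I remember correctly, min_size 2).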
Regards,
Eugen
Quoting Felix O <hostorig@xxxxxxxxx>:
Hello,
I'm trying to deploy my test ceph cluster and enable stretch mode
(https://docs.ceph.com/en/latest/rados/operations/stretch-mode/). My problem
is enabling stretch mode:
----------------------------------------------------
$ ceph mon enable_stretch_mode ceph-node-05 stretch_rule datacenter
Error EINVAL: Could not find location entry for datacenter on monitor ceph-node-05
----------------------------------------------------
ceph-node-05 is the tiebreaker monitor.
I tried to create the third datacenter and put the tiebreaker there but got
the following error:
----------------------------------------------------
root@ceph-node-01:/home/clouduser# ceph mon enable_stretch_mode ceph-node-05 stretch_rule datacenter
Error EINVAL: there are 3datacenter's in the cluster but stretch mode currently only works with 2!
----------------------------------------------------
Some additional info:
----------------------------------------------------
Setup method: cephadm (https://docs.ceph.com/en/latest/cephadm/install/)
# ceph osd tree
ID   CLASS  WEIGHT   TYPE NAME                  STATUS  REWEIGHT  PRI-AFF
 -1         0.03998  root default
-11         0.01999      datacenter site1
 -5         0.00999          host ceph-node-01
  0    hdd  0.00999              osd.0              up   1.00000  1.00000
 -3         0.00999          host ceph-node-02
  1    hdd  0.00999              osd.1              up   1.00000  1.00000
-12         0.01999      datacenter site2
 -9         0.00999          host ceph-node-03
  3    hdd  0.00999              osd.3              up   1.00000  1.00000
 -7         0.00999          host ceph-node-04
  2    hdd  0.00999              osd.2              up   1.00000  1.00000
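The datacenter buckets were created and the hosts moved into them with
the usual crush commands, along these lines:
----------------------------------------------------
# ceph osd crush add-bucket site1 datacenter
# ceph osd crush add-bucket site2 datacenter
# ceph osd crush move site1 root=default
# ceph osd crush move site2 root=default
# ceph osd crush move ceph-node-01 datacenter=site1
# ceph osd crush move ceph-node-02 datacenter=site1
# ceph osd crush move ceph-node-03 datacenter=site2
# ceph osd crush move ceph-node-04 datacenter=site2
----------------------------------------------------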
The stretch_rule has been added to the CRUSH map.
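For reference, a rule along the lines of the example in the stretch mode
docs (with site1 and site2 as the datacenter buckets); it gets appended
to the decompiled CRUSH map text and injected back with crushtool:
----------------------------------------------------
rule stretch_rule {
        id 1
        type replicated
        min_size 1
        max_size 10
        step take site1
        step chooseleaf firstn 2 type host
        step emit
        step take site2
        step chooseleaf firstn 2 type host
        step emit
}
----------------------------------------------------
# ceph osd getcrushmap > crush.map.bin
# crushtool -d crush.map.bin -o crush.map.txt
# crushtool -c crush.map.txt -o crush2.map.bin
# ceph osd setcrushmap -i crush2.map.bin
----------------------------------------------------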
# ceph mon set_location ceph-node-01 datacenter=site1
# ceph mon set_location ceph-node-02 datacenter=site1
# ceph mon set_location ceph-node-03 datacenter=site2
# ceph mon set_location ceph-node-04 datacenter=site2
# ceph versions
{
    "mon": {
        "ceph version 16.2.1 (afb9061ab4117f798c858c741efa6390e48ccf10) pacific (stable)": 5
    },
    "mgr": {
        "ceph version 16.2.1 (afb9061ab4117f798c858c741efa6390e48ccf10) pacific (stable)": 2
    },
    "osd": {
        "ceph version 16.2.1 (afb9061ab4117f798c858c741efa6390e48ccf10) pacific (stable)": 4
    },
    "mds": {},
    "overall": {
        "ceph version 16.2.1 (afb9061ab4117f798c858c741efa6390e48ccf10) pacific (stable)": 11
    }
}
Thank you for your support.
--
Best regards,
_______________________________________________
ceph-users mailing list -- ceph-users@xxxxxxx
To unsubscribe send an email to ceph-users-leave@xxxxxxx