Hi,
On 9/5/22 at 11:05, Maximilian Hill wrote:
sure, let's say min_size is 2 (crush rule and pool).
This would also allow me to read and write when a site is down, and it would be able to recover when the site comes back. Is there any more to stretch mode, or is this essentially it?
I'm just wondering whether the magic happens in stretch mode itself, or in the other actions to take according to the documentation.
What you are missing with stretch mode is that your CRUSH rule wouldn't
guarantee at least one copy in the surviving room (min_size=2 can be
satisfied with 2 copies in the lost room). You may lose data until the
room is recovered (no copies present), or you may have blocked I/O until
a second copy is peered in the surviving room. You would need
size=4/min_size=3 to have at least one guaranteed copy in each room, but
then I/O would block when a room is lost until min_size is adjusted.
Also, the surviving room will have blocked I/O if an OSD crashes,
reboots, etc., until you adjust min_size.
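For illustration, assuming a pool named "mypool" (a placeholder), the
size=4/min_size=3 setup and the manual intervention after losing a room
would look roughly like this:

    # four copies, placed 2+2 across the rooms by the CRUSH rule;
    # min_size=3 means PGs only accept I/O with at least 3 copies,
    # so at least one copy must be in the other room
    ceph osd pool set mypool size 4
    ceph osd pool set mypool min_size 3

    # after a room is lost, I/O blocks until min_size is lowered by hand,
    # e.g. to accept I/O with only the two surviving copies:
    ceph osd pool set mypool min_size 2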
Cheers
On May 9, 2022 9:37:35 AM GMT+02:00, Eneko Lacunza <elacunza@xxxxxxxxx> wrote:
Hi Maximilian,
On 7/5/22 at 19:17, Maximilian Hill wrote:
This would mean that if I already had a CRUSH rule like the following, I wouldn't really need to enable stretch mode:
type replicated
min_size 4
max_size 4
step take room0 class ssd
step chooseleaf firstn 2 type host
step emit
step take room1 class ssd
step chooseleaf firstn -2 type host
step emit
Am I right about that?
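As an aside, a rule like the one above can be compiled and test-mapped
offline with crushtool before injecting it; the file names and the rule
id below are only examples:

    ceph osd getcrushmap -o crushmap.bin
    crushtool -d crushmap.bin -o crushmap.txt
    # edit crushmap.txt to add the rule, then recompile and check the
    # placements it produces for 4 replicas:
    crushtool -c crushmap.txt -o crushmap.new
    crushtool -i crushmap.new --test --rule 1 --num-rep 4 --show-mappings
    ceph osd setcrushmap -i crushmap.new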
With your configuration, if one DC goes down, all I/O will stop (min_size=4!).
Further, I/O will stop even with one OSD down, for the same reason.
Enabling stretch mode will keep I/O going without intervention.
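For reference, the enable sequence from the Ceph documentation looks
roughly like this; the monitor names, the rule name and the 'room'
bucket type are placeholders for your actual layout:

    # connectivity election strategy and a location for every monitor
    ceph mon set election_strategy connectivity
    ceph mon set_location a room=room0
    ceph mon set_location b room=room0
    ceph mon set_location c room=room1
    ceph mon set_location d room=room1
    ceph mon set_location e room=room2   # tie-breaker in a third location

    # enter stretch mode: 'e' is the tie-breaker mon, 'stretch_rule' the
    # CRUSH rule pools are switched to, 'room' the dividing bucket type
    ceph mon enable_stretch_mode e stretch_rule room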
Eneko Lacunza
Director Técnico | Zuzendari teknikoa
Binovo IT Human Project
943 569 206
elacunza@xxxxxxxxx
binovo.es
Astigarragako Bidea, 2 - 2 izda. Oficina 10-11, 20180 Oiartzun
youtube: https://www.youtube.com/user/CANALBINOVO/
linkedin: https://www.linkedin.com/company/37269706/