Re: Stretch mode size

Thanks for your comments, Sake and David.
Depending on the customer's budget, we'll either run some tests with the documented stretch mode or build our own stretch setup in "legacy" style by creating a suitable CRUSH rule (see the sketch below).
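For the record, a minimal sketch of what such a rule could look like (rule name, id, and bucket names are hypothetical; this assumes two datacenter buckets under the default root and a pool size of 6, i.e. three replicas per site):

    rule stretch6 {
            id 2
            type replicated
            step take default
            # pick both datacenter buckets ...
            step choose firstn 2 type datacenter
            # ... and three hosts in each, for 6 replicas total
            step chooseleaf firstn 3 type host
            step emit
    }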

Thanks!
Eugen

Quoting Sake Ceph <ceph@xxxxxxxxxxx>:

I believe they are working on it, or want to work on it, to support reverting from a stretched cluster, for the reason you mention: if the other datacenter burns down completely, you may want to switch to a single-datacenter setup for the time being.

Best regards,
Sake
On 09-11-2023 11:18 CET, Eugen Block <eblock@xxxxxx> wrote:


Hi,

I'd like to ask for confirmation of how I understand the docs on
stretch mode [1]. Does it require exactly size 4 for the rule? Are
other sizes, for example size 6, not supported or expected not to
work? Are there clusters out there that use this stretch mode?
Once stretch mode is enabled, it's not possible to get out of it. How
would one deal with a burnt-down datacenter that could take months to
rebuild? In a "self-managed" stretch cluster (let's say size 6) I
could simply change the CRUSH rule to no longer consider the failed
datacenter, deploy an additional mon somewhere, and maybe reduce the
size/min_size; a sketch of those steps follows below. Am I missing something?
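Roughly the steps I have in mind (a sketch only; the pool name "rbd", the bucket name "dc1" for the surviving datacenter, and the mon placement are just examples):

    # new replicated rule rooted in the surviving datacenter only
    ceph osd crush rule create-replicated single_dc dc1 host
    # switch the pool to that rule and shrink it from 6 to 3 replicas
    ceph osd pool set rbd crush_rule single_dc
    ceph osd pool set rbd size 3
    ceph osd pool set rbd min_size 2
    # redeploy the mons on surviving hosts
    ceph orch apply mon --placement="host1 host2 host3"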

Thanks,
Eugen

[1] https://docs.ceph.com/en/reef/rados/operations/stretch-mode/#id2

_______________________________________________
ceph-users mailing list -- ceph-users@xxxxxxx
To unsubscribe send an email to ceph-users-leave@xxxxxxx




