Re: Stretch cluster questions

Hi Eugen,

On 3/5/22 at 14:01, Eugen Block wrote:

- Can we have multiple pools in a stretch cluster?

Yes, you can have multiple pools, but apparently they all have to be configured with the stretch rule, as you already noted.
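For reference, the DC-aware rule shown in the stretch mode docs looks roughly like this (a sketch only; the bucket names site1/site2 are examples and depend on how your CRUSH map is laid out):

```
rule stretch_rule {
    id 1
    type replicated
    # take 2 replicas from hosts in each datacenter bucket
    step take site1
    step chooseleaf firstn 2 type host
    step emit
    step take site2
    step chooseleaf firstn 2 type host
    step emit
}
```

With size=4 on the pool, this places two copies in each DC.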

- Can we have multiple different crush rules in a stretch cluster?

It's still a regular Ceph cluster, so you can have different CRUSH rules. But stretch mode requires a specific rule, so your other rules probably won't be covered by it. I haven't had the chance to test stretch mode on a real cluster yet, so I'm not really sure whether it actually supports only one rule.

We have built stretched clusters without the actual stretch mode on previous releases, though; it's basically just a matter of CRUSH rules that meet your resiliency requirements. Just remember that you need a third monitor as a tiebreaker.

I would assume that if you configure your cluster with stretch mode, the pools configured with the stretch rule get its advantages (OSD-to-OSD communication stays within one DC), while other DC-aware rules would still apply and ensure the configured resiliency. So that would result in some form of mixed mode or something.
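In case it helps, enabling stretch mode per the docs looks roughly like this (a sketch; the monitor names and datacenter bucket names here are examples, not from your cluster):

```
# Tag each monitor with its location; mon.e is the tiebreaker in a third site
ceph mon set_location a datacenter=site1
ceph mon set_location b datacenter=site1
ceph mon set_location c datacenter=site2
ceph mon set_location d datacenter=site2
ceph mon set_location e datacenter=site3

# enable_stretch_mode <tiebreaker mon> <new CRUSH rule> <dividing bucket type>
ceph mon enable_stretch_mode e stretch_rule datacenter
```

Note that the activation command takes exactly one CRUSH rule, which is why I suspect only pools using that rule are fully covered by stretch mode.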

Thanks for your input. I'm in the process of testing this feature with 7 VMs; I'll report back if I find anything interesting. :)

Cheers,
Eneko


Quote from Eneko Lacunza <elacunza@xxxxxxxxx>:

Hi all,

We're looking to deploy a stretch cluster across two datacenters (CPDs). I have read the following docs: https://docs.ceph.com/en/latest/rados/operations/stretch-mode/#stretch-clusters

I have some questions:

- Can we have multiple pools in a stretch cluster?
- Can we have multiple different crush rules in a stretch cluster? I'm asking this because the command for stretch mode activation asks for a rule...

We want to have different purpose pools on this Ceph cluster:

- Important VM disks, with 2 copies in each DC (SSD class)
- Ephemeral VM disks, with just 2 copies overall (SSD class)
- Backup data in just one DC (HDD class).

The objective of the 2-DC deployment is disaster recovery; HA isn't required, but I'll take it if the deployment is reasonable :-) .

An alternative would be a size=4/min_size=3 pool for the important VM disks in a non-stretch cluster...

Thanks

Eneko Lacunza
Zuzendari teknikoa | Director técnico
Binovo IT Human Project

Tel. +34 943 569 206 |https://www.binovo.es
Astigarragako Bidea, 2 - 2º izda. Oficina 10-11, 20180 Oiartzun

https://www.youtube.com/user/CANALBINOVO
https://www.linkedin.com/company/37269706/
_______________________________________________
ceph-users mailing list -- ceph-users@xxxxxxx
To unsubscribe send an email to ceph-users-leave@xxxxxxx






