Re: EC profiles where m>k (EC 8+12)

Hi, thanks for your reply!

Stretch mode is obviously useful with small pools, but with its replica size of 4 the efficiency is only 25%, and we can't afford that (buying 16 PiB raw for 4 PiB net is quite hard to justify to budget holders...).

Good to hear that you've used such an EC setup in prod, thanks for sharing!

Cheers,

F.


On 3/24/23 13:11, Eugen Block wrote:
Hi,

we have multiple customers with such profiles, for example one with k=7 m=11 for a two-site cluster (20 nodes in total). The customer is pretty happy with the resiliency: they actually had multiple outages of one DC and everything kept working fine. Although there's also stretch mode (which I haven't tested properly yet), I can encourage you to use such a profile. Just be advised to test your crush rule properly. ;-)
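For what it's worth, here is a minimal sketch of what such a profile and rule could look like for a two-room cluster. The profile name, rule name, rule id, and room/host bucket types are illustrative assumptions, not taken from this thread; decompile your own crush map and verify placement (e.g. with crushtool --test) before relying on it:

```
# Illustrative only: an EC 8+12 profile (names are placeholders).
ceph osd erasure-code-profile set ec-8-12 \
    k=8 m=12 crush-failure-domain=host

# A rule in the decompiled crush map that pins 10 shards per room,
# so losing a room leaves 10 of 20 shards, still >= k=8:
#
# rule ec_8_12_two_rooms {
#     id 77                               # pick an unused rule id
#     type erasure
#     step set_chooseleaf_tries 5
#     step take default
#     step choose indep 2 type room       # both rooms
#     step chooseleaf indep 10 type host  # 10 distinct hosts per room
#     step emit
# }
```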

Regards,
Eugen

Quoting Fabien Sirjean <fsirjean@xxxxxxxxxxxx>:

Hi Ceph users!

Someone proposed to me an interesting EC setup I hadn't thought about before.

The scenario is: we have two server rooms and want to store ~4 PiB with the ability to lose one server room without losing data or read/write availability.

For context, performance is not needed (mostly cold storage, used as a big filesystem).

The idea is to use EC 8+12 over 24 servers (12 in each server room), so if we lose one room we still have half of the EC shards (10/20) and can lose two more servers before reaching the point where we lose data.

I find this pretty elegant in a two-site context, as the efficiency is 40% (better than the 33% of three-way replication) and the redundancy is good.
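The arithmetic above can be sketched in a few lines; this assumes the crush rule places exactly one shard per host and exactly 10 shards per room, which is not automatic and is what the rule has to enforce:

```python
# Redundancy arithmetic for an EC 8+12 pool spread evenly over two rooms.
# Assumption: one shard per host, exactly 10 shards per room.
k, m = 8, 12
shards = k + m                       # 20 shards in total
shards_per_room = shards // 2        # 10 shards in each room

# Storage efficiency: usable fraction of raw capacity.
efficiency = k / shards              # 8/20 = 0.40

# After losing one full room, 10 shards survive; data stays readable
# as long as at least k shards remain.
surviving = shards - shards_per_room
extra_failures_tolerated = surviving - k

print(efficiency)                # 0.4
print(surviving)                 # 10
print(extra_failures_tolerated)  # 2
```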

What do you think of this setup? Have you ever used EC profiles with m > k?

Thanks for sharing your thoughts!

Cheers,

Fabien
_______________________________________________
ceph-users mailing list -- ceph-users@xxxxxxx
To unsubscribe send an email to ceph-users-leave@xxxxxxx


