Re: EC profiles where m>k (EC 8+12)

A custom CRUSH rule with two choose steps can enforce that: first select the two rooms, then select the hosts within each room.
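A minimal sketch, assuming the CRUSH map has a "room" bucket level under the "default" root with the hosts beneath it (the rule name, id, profile name, pool name and PG count below are placeholders, not tested against your map):

    rule ec_8_12_two_rooms {
        id 42
        type erasure
        step set_chooseleaf_tries 5
        step set_choose_tries 100
        step take default
        # first step: pick both rooms
        step choose indep 2 type room
        # second step: pick 10 distinct hosts in each room, one OSD per host
        step chooseleaf indep 10 type host
        step emit
    }

Paired with a matching profile and pool, something like:

    ceph osd erasure-code-profile set ec_8_12 k=8 m=12
    ceph osd pool create ecpool 1024 1024 erasure ec_8_12 ec_8_12_two_rooms

Each PG then always gets exactly 10 of its 20 chunks per room, so losing an entire room leaves 10 >= k=8 chunks available and two further host failures can still be tolerated. Worth verifying with crushtool or on a test cluster before relying on it.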

> On Mar 24, 2023, at 11:04, Danny Webb <Danny.Webb@xxxxxxxxxxxxxxx> wrote:
> 
> The question I have regarding this setup is: how can you guarantee that the chunks will be placed evenly across the two rooms? What would happen if, by chance, all 12 m chunks ended up in room B? Usually you use failure domains to control the distribution of chunks across domains, but you can't do that here, as you are using host as the failure domain while also needing room to somehow be taken into account.
> ________________________________
> From: Fabien Sirjean <fsirjean@xxxxxxxxxxxx>
> Sent: 24 March 2023 12:00
> To: ceph-users <ceph-users@xxxxxxx>
> Subject:  EC profiles where m>k (EC 8+12)
> 
> Hi Ceph users!
> 
> I've had an interesting EC setup proposed to me that I hadn't thought about before.
> 
> The scenario is: we have two server rooms and want to store ~4 PiB with the
> ability to lose one server room without losing data or RW availability.
> 
> For context, performance is not a requirement (mostly cold storage, used as
> a big filesystem).
> 
> The idea is to use EC 8+12 over 24 servers (12 in each server room), so
> if we lose one room we still have half of the EC chunks (10/20) and can
> lose 2 more servers before reaching the point where we lose data.
> 
> I find this pretty elegant in a two-site context, as the storage
> efficiency is 40% (better than the 33% of three-way replication) and the
> redundancy is good.
> 
> What do you think of this setup? Have you ever used EC profiles with m > k?
> 
> Thanks for sharing your thoughts!
> 
> Cheers,
> 
> Fabien
> 
> 
> Danny Webb
> Principal OpenStack Engineer
> Danny.Webb@xxxxxxxxxxxxxxx
_______________________________________________
ceph-users mailing list -- ceph-users@xxxxxxx
To unsubscribe send an email to ceph-users-leave@xxxxxxx


