Re: Combining erasure coding and replication?


 



Hi Brett,

I'm far from an expert, but you might want to consider rbd-mirroring between EC pools.
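
Very roughly, that would mean an EC data pool plus a small replicated
metadata pool for the RBD images at each site, with journal-based
mirroring between the two clusters. A minimal sketch, just to illustrate
the idea (pool, image and peer names are placeholders, not a tested
recipe):

  # per cluster: EC pool for the data, replicated pool for the image metadata
  ceph osd pool create rbd-data 128 128 erasure
  ceph osd pool set rbd-data allow_ec_overwrites true
  ceph osd pool create rbd-meta 64 64 replicated
  rbd pool init rbd-meta

  # the image header lives in the replicated pool, the bulk data on the EC pool
  rbd create rbd-meta/vol01 --size 10T --data-pool rbd-data
  rbd feature enable rbd-meta/vol01 journaling

  # pool-mode mirroring: every image with journaling enabled gets mirrored
  rbd mirror pool enable rbd-meta pool
  rbd mirror pool peer add rbd-meta client.rbd-mirror-peer@remote-site
  # (an rbd-mirror daemon on the other side replays the journals)

The metadata pool has to be replicated because RBD can't place image
headers on an EC pool; only the data objects go to the EC pool via
--data-pool.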

Cheers,
Lars

On Fri, 27 Mar 2020 06:28:02 +0000,
Brett Randall <brett.randall@xxxxxxxxx> wrote:

> Hi all
> 
> Had a fun time trying to join this list; hopefully you don’t get this message three times!
> 
> On to Ceph… We are looking at setting up our first ever Ceph cluster to replace Gluster as our media asset storage and production system. The Ceph cluster will have 5 PB of usable storage. Whether we use it as object storage or put CephFS in front of it is still TBD.
> 
> Obviously we’re keen to protect this data well. Our current Gluster setup uses RAID-6 on each of the nodes, and we keep a single replica of each brick. The Gluster bricks are split between buildings so that each replica is guaranteed to be in another building. By doing it this way, we can tolerate a decent number of disk or node failures (even the loss of an entire building) before we lose both connectivity and data.
> 
> Our concern with Ceph is the cost of having three replicas. Storage may be cheap, but I’d rather not buy ANOTHER 5 PB for a third replica if there are ways to do this more efficiently. Site-level redundancy is important to us, so we can’t simply create an erasure-coded volume across two buildings – if we lose power to one building, the entire array would become unavailable. Likewise, we can’t simply have a single replica – our fault tolerance would drop well below what it is right now.
> 
> Is there a way to use both erasure coding AND replication at the same time in Ceph to mimic the architecture we currently have in Gluster? I know we COULD just create a RAID-6 volume on each node and use the entire volume as a single OSD, but this is not the recommended way to use Ceph. So is there some other way?
> 
> Apologies if this is a nonsensical question, I’m still trying to wrap my head around Ceph, CRUSH maps, placement rules, volume types, etc etc!
> 
> TIA
> 
> Brett
> 
> _______________________________________________
> ceph-users mailing list -- ceph-users@xxxxxxx
> To unsubscribe send an email to ceph-users-leave@xxxxxxx



-- 
                            Informationstechnologie
Berlin-Brandenburgische Akademie der Wissenschaften
Jägerstraße 22-23                      10117 Berlin
Tel.: +49 30 20370-352           http://www.bbaw.de
_______________________________________________
ceph-users mailing list -- ceph-users@xxxxxxx
To unsubscribe send an email to ceph-users-leave@xxxxxxx



