Re: erasure coded pool k=7,m=5

Hi Stéphane,

On 23/12/2014 14:34, Stéphane DUGRAVOT wrote:
> Hi all,
> 
> Soon, we should have a 3-datacenter (dc) ceph cluster with 4 hosts in each dc. Each host will have 12 OSDs.
> 
> We can accept the loss of one datacenter and one host on the remaining 2 datacenters.
> In order to use erasure coded pool :
> 
>  1. Is a strategy of k = 7, m = 5 acceptable?

If you want to sustain the loss of one datacenter, k=2,m=1 is what you want, with a ruleset that requires that no two chunks be placed in the same datacenter. It also sustains the loss of one host within a datacenter: the missing chunk on the lost host will be reconstructed from the two other chunks in the two other datacenters.
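As a sketch, that could look like the following (the profile and pool names are placeholders, it assumes your CRUSH map already has datacenter buckets, and the failure-domain parameter was spelled ruleset-failure-domain in releases of that era, crush-failure-domain in later ones):

```shell
# Hypothetical names; requires datacenter buckets in the CRUSH map.
# k=2,m=1 with the failure domain set to datacenter, so each of the
# three chunks lands in a different datacenter.
ceph osd erasure-code-profile set ec21profile k=2 m=1 \
    ruleset-failure-domain=datacenter
ceph osd pool create ecpool 128 128 erasure ec21profile
```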

If, in addition, you want to sustain the loss of one machine while a datacenter is down, you would need to use the LRC plugin.
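A sketch mirroring the example from the LRC plugin documentation; the k/m/l values are illustrative, not tuned for this cluster:

```shell
# Illustrative values only. l adds one local parity chunk per group
# of l chunks, so a chunk lost inside a group can be rebuilt from
# that group alone. ruleset-locality=datacenter asks that each local
# group stay within one datacenter (again assuming datacenter
# buckets exist in the CRUSH map).
ceph osd erasure-code-profile set lrcprofile plugin=lrc \
    k=4 m=2 l=3 \
    ruleset-failure-domain=host \
    ruleset-locality=datacenter
ceph osd pool create lrcpool 128 128 erasure lrcprofile
```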

>  2. Is this the only configuration that guarantees our premise?
>  3. And more generally, is there a formula (based on the number of dcs, hosts and OSDs) that allows us to calculate the profile?

I don't think there is such a formula.
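One thing you can compute directly, though, is the raw-space cost of a profile, which is (k+m)/k. Comparing the two profiles discussed above:

```shell
# Raw-space multiplier for an erasure coded pool is (k+m)/k.
awk 'BEGIN { printf "k=7,m=5 uses %.2fx raw space\n", (7+5)/7 }'
awk 'BEGIN { printf "k=2,m=1 uses %.2fx raw space\n", (2+1)/2 }'
```

So k=7,m=5 costs about 1.71x raw space and k=2,m=1 costs 1.50x, before considering which failures each can survive.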

Cheers

> Thanks.
> Stephane.
> 
> -- 
> *Université de Lorraine**/
> /*Stéphane DUGRAVOT - Direction du numérique - Infrastructure
> Jabber : /stephane.dugravot@xxxxxxxxxxxxxxxx/
> Tél.: /+33 3 83 68 20 98/
> 
> 
> 
> _______________________________________________
> ceph-users mailing list
> ceph-users@xxxxxxxxxxxxxx
> http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com
> 

-- 
Loïc Dachary, Artisan Logiciel Libre


