>>Hi Alexandre,
>>
>>nice to meet you here ;-)

Hi Udo! (Udo from Proxmox? ;)

>>With 3 hosts only you can't survive a full node failure, because for
>>that you need host >= k + m.
>>And k:1 m:2 doesn't make any sense.
>>
>>I start with 5 hosts and use k:3, m:2. In this case two HDDs can fail or
>>one host can be down for maintenance.

OK, thanks! With Loic's explanation too, it's clear now!


----- Original Message -----
From: "Udo Lembke" <ulembke@xxxxxxxxxxxx>
To: "aderumier" <aderumier@xxxxxxxxx>, "ceph-users" <ceph-users@xxxxxxxxxxxxxx>
Sent: Sunday, February 1, 2015 19:38:55
Subject: Re: erasure code : number of chunks for a small cluster ?

Hi Alexandre,
nice to meet you here ;-)

With 3 hosts only you can't survive a full node failure, because for
that you need host >= k + m.
And k:1 m:2 doesn't make any sense.

I start with 5 hosts and use k:3, m:2. In this case two HDDs can fail or
one host can be down for maintenance.

Udo

PS: you also can't change k+m on a pool later...

On 01.02.2015 18:15, Alexandre DERUMIER wrote:
> Hi,
>
> I'm currently trying to understand how to correctly set up a pool with erasure code:
>
> https://ceph.com/docs/v0.80/dev/osd_internals/erasure_coding/developer_notes/
>
> My cluster is 3 nodes with 6 OSDs per node (18 OSDs total).
>
> I want to be able to survive 2 disk failures, but also a full node failure.
>
> What is the best setup for this? Do I need M=2 or M=6?
>
> Also, how do I determine the best number of chunks?
>
> For example:
> K = 4, M = 2
> K = 8, M = 2
> K = 16, M = 2
>
> With each config you can lose 2 OSDs, but the more data chunks you have, the less space is used by coding chunks, right?
> Does the number of chunks have a performance impact? (read/write?)
>
> Regards,
>
> Alexandre
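
To make the space part of the question concrete, the arithmetic follows directly from the k/m definitions above (nothing cluster-specific): an object is split into k data chunks plus m coding chunks, so it consumes (k+m)/k times its size in raw space, and any m chunks can be lost.

  K = 4,  M = 2  ->  6/4   = 1.50x raw space  (2/6  ~ 33% of raw space is coding chunks)
  K = 8,  M = 2  ->  10/8  = 1.25x raw space  (2/10 = 20% coding chunks)
  K = 16, M = 2  ->  18/16 ~ 1.13x raw space  (2/18 ~ 11% coding chunks)

So yes, larger K means proportionally less space spent on coding chunks, but every object is spread over K+M OSDs, and with the host as failure domain you need at least K+M hosts -- which is Udo's point about why a 3-node cluster can't hold such a profile and still survive a full node failure.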
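
For completeness, a minimal sketch of how Udo's k=3/m=2 profile could be created on a Firefly-era (v0.80) cluster, with the host as failure domain so no two chunks of a placement group land on the same host. The profile name, pool name and PG count below are placeholders, not anything from this thread; on recent Ceph releases the failure-domain key is spelled crush-failure-domain instead of ruleset-failure-domain.

  # erasure-code profile: 3 data chunks + 2 coding chunks, one chunk per host
  ceph osd erasure-code-profile set ecprofile-k3m2 k=3 m=2 ruleset-failure-domain=host

  # show the resulting profile (plugin, k, m, failure domain, ...)
  ceph osd erasure-code-profile get ecprofile-k3m2

  # create an erasure-coded pool using that profile
  # (128 is only an example pg_num / pgp_num)
  ceph osd pool create ecpool 128 128 erasure ecprofile-k3m2

As Udo notes, k and m can't be changed on an existing pool, so the profile has to be right before the pool is created.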