erasure code: number of chunks for a small cluster?

Hi,

I'm currently trying to understand how to correctly set up a pool with erasure coding:


https://ceph.com/docs/v0.80/dev/osd_internals/erasure_coding/developer_notes/


My cluster has 3 nodes with 6 OSDs per node (18 OSDs total).

I want to be able to survive 2 disk failures, and also a full node failure (i.e. 6 OSDs lost at once).

What is the best setup for this? Do I need M=2 or M=6?




Also, how do I determine the best number of chunks?

For example:
K=4,  M=2
K=8,  M=2
K=16, M=2

With each of these configs you can lose 2 OSDs, but the more data chunks you have, the less space is used by coding chunks, right?
Does the number of chunks have a performance impact (read/write)?
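
As a sanity check of my assumption about space usage (just my own back-of-the-envelope sketch, not something from the docs), a few lines of Python computing the usable/coding split for the configs above:

    # Fraction of raw space used for data vs coding chunks in one stripe
    for k, m in [(4, 2), (8, 2), (16, 2)]:
        total = k + m
        print(f"K={k:2d} M={m}: usable {k / total:.1%}, coding overhead {m / total:.1%}")

which gives roughly 33% / 20% / 11% of raw space going to coding chunks for K=4 / K=8 / K=16, so the overhead does shrink as K grows.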

Regards,

Alexandre







