About the number of OSD nodes that can fail with erasure code 3+2

Hi group,

Recently I set up a Ceph cluster with 10 nodes and 144 OSDs, and I am using it for S3 with an erasure-coded pool (EC 3+2).

I have a question: how many OSD nodes can fail with erasure code 3+2 while the cluster keeps working normally (read and write)? And would a different erasure code profile, such as EC 7+3 or 8+2, be a better choice?
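
To make the comparison concrete, here is the rough arithmetic I have been using (just a sketch in Python, not Ceph commands; I am assuming crush-failure-domain=host so every shard lands on a different node):

    # Rough arithmetic only, not Ceph code. The assumption that
    # crush-failure-domain=host (one shard per node) is mine.
    profiles = {"EC 3+2": (3, 2), "EC 7+3": (7, 3), "EC 8+2": (8, 2)}

    for name, (k, m) in profiles.items():
        overhead = (k + m) / k  # raw space used per unit of user data
        print(f"{name}: spreads over {k + m} nodes, tolerates {m} node "
              f"failures without data loss, space overhead {overhead:.2f}x")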

My understanding is that the erasure code algorithm only ensures there is no data loss; it does not guarantee that the cluster keeps operating normally or that I/O is not blocked when OSD nodes are down. Is that right?
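
Here is that understanding written out as a small sketch. I am assuming the pool keeps the default min_size = k + 1 for erasure-coded pools and a host failure domain; please correct me if those assumptions are wrong:

    # Sketch of my understanding only. I am assuming the EC pool keeps the
    # default min_size = k + 1 and that the failure domain is the host, so
    # each of the k + m shards lives on a different node.
    k, m = 3, 2
    min_size = k + 1  # assumed default for erasure-coded pools

    for failed_nodes in range(m + 2):
        shards_left = (k + m) - failed_nodes
        data_safe = shards_left >= k            # enough shards to rebuild data
        io_continues = shards_left >= min_size  # PGs stay active for I/O
        print(f"{failed_nodes} node(s) down: data safe={data_safe}, "
              f"I/O continues={io_continues}")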

Thanks to the community.
_______________________________________________
ceph-users mailing list -- ceph-users@xxxxxxx
To unsubscribe send an email to ceph-users-leave@xxxxxxx


