Re: Erasure coding best practice

I would like to know more about those corner cases and why this approach is not recommended. Our customers, and we ourselves, have been using such profiles for years, including several occasions when one of two DCs failed with k=7, m=11, and they were quite happy with the resiliency Ceph provided.
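As a quick sanity check on why k=7, m=11 survives a full DC outage (my own arithmetic, not from the thread): each object is encoded into 18 shards, any 7 of which suffice to reconstruct it, so splitting the shards 9/9 across two DCs leaves 9 survivors when one DC goes down, still more than k. A minimal sketch:

```python
# Hypothetical sanity check of the k=7, m=11 profile mentioned above.
k, m = 7, 11           # data shards, coding (parity) shards
total = k + m          # 18 shards per object
overhead = total / k   # raw-space multiplier, ~2.57x

# Assume shards are placed evenly across two data centers (9 + 9).
shards_per_dc = total // 2

# Losing one entire DC removes 9 shards; any k=7 suffice to read.
surviving = total - shards_per_dc
assert surviving >= k  # pool stays readable after a whole-DC outage
print(f"{total} shards, {overhead:.2f}x overhead, "
      f"survives DC loss: {surviving >= k}")
```

The trade-off is visible in the overhead figure: the durability comes at roughly 2.57x raw space.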

Quoting Anthony D'Atri <aad@xxxxxxxxxxxxxx>:

Just repeating what I read.  I suspect that the effect is minimal.

Back when I did a lot of ZFS, the conventional wisdom was that a given parity group shouldn't have more than 9 drives, to keep rebuilds and writes semi-manageable.

A few years back, someone asserted that k and m values with small prime factors are advantageous, so 23,11 would be doubleplus ungood.

I thought the guidance was that K should preferably be a power of two, with M as large as your durability requirements demand.
Also, pools should have power-of-two PG counts, and bucket shard counts should be prime.

I could be wrong though.
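For concreteness, the power-of-two guidance above might look like this when creating a profile and pool. This is a sketch only: the profile name, the k=8/m=3 choice, the failure domain, and the PG count of 256 are my own illustrative picks, not from the thread.

```shell
# Illustrative only: an EC profile with a power-of-two K (k=8)
# and M sized to the durability you need (here m=3).
ceph osd erasure-code-profile set ec-8-3 \
    k=8 m=3 crush-failure-domain=host

# A pool using that profile, with a power-of-two PG count (256).
ceph osd pool create ecpool 256 256 erasure ec-8-3
```

Requires a running Ceph cluster; verify pg_num against your OSD count before applying.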

--
May the most significant bit of your life be positive.
_______________________________________________
ceph-users mailing list -- ceph-users@xxxxxxx
To unsubscribe send an email to ceph-users-leave@xxxxxxx





