Re: Erasure Encoding Chunks


On 05/12/2014 16:21, Nick Fisk wrote:
> Hi All,
>
> Does anybody have any input on the best ratio and total number of data + coding chunks to choose?
>
> For example, I could create a pool with 7 data chunks and 3 coding chunks and get an efficiency of 70%, or I could create a pool with 17 data chunks and 3 coding chunks and get an efficiency of 85%, with a similar probability of protecting against OSD failure.
>
> What's the reason I would choose 10 total chunks over 20? Is it purely down to the overhead of having potentially double the number of chunks per object?

Hi Nick,

Assuming you have a large number of OSDs (a thousand or more) holding cold data, 20 is probably better. A read then involves 20 OSDs instead of 10, but you probably don't care if reads are rare.
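For concreteness, the efficiency figures in the question follow directly from k / (k + m), where k is the number of data chunks and m the number of coding chunks. A quick sketch (function and variable names are mine, not Ceph's):

```python
def ec_efficiency(k, m):
    """Storage efficiency of a k data + m coding chunk erasure-coded
    pool: the usable fraction of raw capacity."""
    return k / (k + m)

# The two layouts from the question: both use m=3, so both tolerate
# the loss of any 3 chunks, but efficiency and read fan-out differ.
print(ec_efficiency(7, 3))    # 0.7  -> 70% efficient, a read touches 10 OSDs
print(ec_efficiency(17, 3))   # 0.85 -> 85% efficient, a read touches 20 OSDs
```

The trade-off in one line: m fixes the failure tolerance, while growing k improves efficiency at the cost of spreading every object over more OSDs.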

Disclaimer: I'm a developer, not an architect ;-) It would help to know the target use case, the size of the data set and the expected read/write rate.
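For reference, the 17+3 layout from the question would be set up along these lines (profile name, pool name and placement-group count are placeholders, not from the thread):

```shell
# Define an erasure-code profile with 17 data and 3 coding chunks
ceph osd erasure-code-profile set ec-17-3 k=17 m=3

# Create a pool using that profile (128 placement groups as an example)
ceph osd pool create ecpool 128 128 erasure ec-17-3
```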

Cheers

-- 
Loïc Dachary, Artisan Logiciel Libre


_______________________________________________
ceph-users mailing list
ceph-users@xxxxxxxxxxxxxx
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com
