Any concerns using EC with CLAY in Quincy (or Pacific)?

Hi all,

We've deployed a new cluster on Quincy 17.2.3 with 260x 18 TB spinners across 11 chassis, to be used exclusively as an S3 store for the next year or so. Each chassis has 100 GbE shared by both the cluster and public networks, NVMe DB/WAL, 32 physical cores @ 2.3 GHz base, and 192 GB RAM (per 24 OSDs).

We're looking to use the CLAY EC plugin for our RGW (data) pool, as it appears to require fewer reads during recovery, which might be beneficial. I'm going to benchmark recovery scenarios ahead of production, but that of course doesn't give a view on longer-term reliability. :)  Has anyone heard of any bad experiences, or any reason not to use it over jerasure? Any reason to prefer cauchy-good over reed-solomon for the use case above?
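For context, here is roughly the kind of profile we're considering. A sketch only, assuming k=8/m=3 (we haven't settled on the width yet); per the Ceph docs, CLAY adds the d parameter (number of OSDs contacted during recovery, defaulting to k+m-1) and scalar_mds selects the underlying MDS plugin/technique:

```shell
# Hypothetical CLAY profile; k, m, and pool names are placeholders, not our final values.
ceph osd erasure-code-profile set clay-rgw-profile \
    plugin=clay \
    k=8 m=3 \
    d=10 \
    scalar_mds=jerasure \
    technique=reed_sol_van \
    crush-failure-domain=host

# Create the RGW data pool using the profile.
ceph osd pool create default.rgw.buckets.data erasure clay-rgw-profile
```

Lowering d below k+m-1 trades recovery bandwidth for contacting fewer OSDs, which is part of what we want to benchmark.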


Ngā mihi,

Sean Matheny
HPC Cloud Platform DevOps Lead
New Zealand eScience Infrastructure (NeSI)

e: sean.matheny@xxxxxxxxxxx



_______________________________________________
ceph-users mailing list -- ceph-users@xxxxxxx
To unsubscribe send an email to ceph-users-leave@xxxxxxx



