Re: Any concerns using EC with CLAY in Quincy (or Pacific)?

Hi Jeremy, 

Thanks for the feedback, and good to know that clay has been stable for you. Would you mind sharing your motivation for going with clay? Was it the recovery tail performance of clay versus jerasure, or some other reason(s)? Did you happen to do any benchmarking of clay vs jerasure (either for normal writes and reads, or in recovery scenarios)?
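
For reference, here's roughly how I'm planning to set up our own comparison. This is just a sketch; the profile/pool names and the k/m/d values below are placeholders, not a recommendation:

   # clay profile; d = number of helper OSDs contacted when recovering
   # a single chunk (k < d <= k+m-1, and d = k+m-1 gives the largest
   # recovery savings). clay also accepts scalar_mds= and technique=
   # (e.g. technique=cauchy_good) to vary the underlying MDS code.
   ceph osd erasure-code-profile set clay-8-3 \
       plugin=clay k=8 m=3 d=10 \
       crush-failure-domain=host

   # matching jerasure profile for an apples-to-apples comparison
   ceph osd erasure-code-profile set jer-8-3 \
       plugin=jerasure k=8 m=3 technique=reed_sol_van \
       crush-failure-domain=host

   ceph osd pool create bench-clay 128 erasure clay-8-3
   ceph osd pool create bench-jer 128 erasure jer-8-3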

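And the recovery test itself, in rough form (the OSD id is a placeholder, and marking a single OSD out only simulates a one-disk loss, not a chassis failure):

   # baseline client I/O on each pool first
   rados bench -p bench-clay 60 write --no-cleanup
   rados bench -p bench-clay 60 seq

   # then take an OSD out and time the recovery / watch read traffic
   ceph osd out <osd-id>
   ceph -s              # recovery progress
   ceph osd in <osd-id>
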
Ngā mihi,

Sean Matheny
HPC Cloud Platform DevOps Lead
New Zealand eScience Infrastructure (NeSI)

e: sean.matheny@xxxxxxxxxxx

> On 12/11/2022, at 9:43 AM, Jeremy Austin <jhaustin@xxxxxxxxx> wrote:
> 
> I'm running 16.2.9 and have been using clay for 3 or 4 years. I can't speak to your scale, but I have had no long-term reliability problems at small scale, including one or two hard power-down scenarios. (Alaska power is not too great! Not so much a grid as a very short stepladder.)
> 
> On Thu, Oct 20, 2022 at 12:05 PM Sean Matheny <sean.matheny@xxxxxxxxxxx> wrote:
>> Hi all, 
>> 
>> We've deployed a new cluster on Quincy 17.2.3 with 260x 18TB spinners across 11 chassis, to be used exclusively as an S3 store for the next year or so. 100Gb per chassis shared by both cluster and public networks, NVMe DB/WAL, 32 physical cores @ 2.3GHz base, and 192GB RAM per chassis (per 24 OSDs).
>> 
>> We're looking to use the clay EC plugin for our RGW (data) pool, as it appears to need fewer reads during recovery, which could be beneficial. I'm going to be benchmarking recovery scenarios ahead of production, but that of course doesn't give a view on longer-term reliability. :)  Has anyone heard of any bad experiences, or any reason not to use it over jerasure? Any reason to use cauchy-good instead of reed-solomon for the use case above?
>> 
>> 
>> Ngā mihi,
>> 
>> Sean Matheny
>> HPC Cloud Platform DevOps Lead
>> New Zealand eScience Infrastructure (NeSI)
>> 
>> e: sean.matheny@xxxxxxxxxxx
>> 
> 
> 
> -- 
> Jeremy Austin
> jhaustin@xxxxxxxxx
_______________________________________________
ceph-users mailing list -- ceph-users@xxxxxxx
To unsubscribe send an email to ceph-users-leave@xxxxxxx