Bigger picture 'ceph web calculator', was Re: SATA vs SAS


 



This topic comes up often enough that maybe it's time for one of those 'web
calculators': one that lets a user who knows their goals, but not their
ceph-fu, enter the importance of various factors (my suggested factors:
read frequency per stored TB, write frequency per stored TB, unreplicated
TB needed, minimum target days between first failure and cluster failure).
The handy calculator would then spit out a few Ceph configs: an 'optimized'
layout for their goal, plus what the layout would look like if each factor
were 'a little more' or 'a little less'.  The output would read something
like 'x SSDs of size s, y 7200 rpm drives with MTBF q, z 5400 rpm, SAS or
SATA, across aa hosts each with not less than Y GB of RAM and P cores of
not less than G GHz single-threaded performance per core; Ceph configured
as replicated or erasure-coded', etc., along with a target expected cost.
That would be a service folks would pay for, I think.  It would answer the
question 'what would it take for Ceph to deliver X?'  Folks would quickly
notice whether they really need one cluster, or two with very different
performance goals.
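For concreteness, the calculator idea above could be sketched roughly like this. Everything in the sketch -- the factor names, the replication-vs-erasure threshold, the drive size, and the 8-drives-per-host density -- is an illustrative assumption, not a real Ceph sizing rule:

```python
# Toy sketch of the proposed "ceph web calculator".
# All thresholds and constants below are placeholder assumptions.
from dataclasses import dataclass


@dataclass
class Goals:
    reads_per_tb: float    # read frequency per stored TB
    writes_per_tb: float   # write frequency per stored TB
    usable_tb: float       # unreplicated capacity needed
    min_days_to_loss: int  # target days between first failure and cluster failure


def suggest(goals: Goals) -> dict:
    # Toy heuristic: write-heavy workloads or tight durability targets
    # favor 3x replication; otherwise erasure coding saves raw capacity.
    if goals.writes_per_tb > 100 or goals.min_days_to_loss < 7:
        layout, overhead = "3x replication", 3.0
    else:
        layout, overhead = "erasure coding 4+2", 1.5
    raw_tb = goals.usable_tb * overhead
    # Placeholder density: 8 OSD drives of 16 TB per host, 3-host minimum.
    hosts = max(3, round(raw_tb / (8 * 16)))
    return {"layout": layout, "raw_tb": raw_tb, "hosts": hosts}
```

A real version would of course pull in failure-rate data, network and CPU sizing, and pricing, but even a crude model like this would show users the 'a little more / a little less' sensitivity described above.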




On 8/21/21 12:46 PM, Roland Giesler wrote:
> Hi all,
>
> (I asked this on the Proxmox forums, but I think it may be more
> appropriate here.)
>
> In your practical experience, when I choose new hardware for a
> cluster, is there any noticeable difference between using SATA or SAS
> drives? I know SAS drives can have a 12Gb/s interface and I think SATA
> can only do 6Gb/s, but in my experience the drives themselves can't
> write at 12Gb/s anyway, so it makes little if any difference.
>
> I use a combination of SSD's and SAS drives in my current cluster (in
> different ceph pools), but I suspect that if I choose SATA enterprise
> class drives for this project, it will get the same level of
> performance.
>
> I think with ceph the hard error rate of drives becomes less relevant
> than if I had used some level of RAID.
>
> Also, if I go with SATA, I can use AMD Epyc processors (and I don't
> want to use a different supplier), which gives me a lot of extra cores
> per unit at a lesser price, which of course all adds up to a better
> deal in the end.
>
> I'd like to specifically hear from you what your experience is in this regard.
> _______________________________________________
> ceph-users mailing list -- ceph-users@xxxxxxx
> To unsubscribe send an email to ceph-users-leave@xxxxxxx

