Re: SATA vs SAS

Basically yes, but I would not say supercritical.

If the SSD cannot deliver enough IOPS for Ceph, it will stall even slow consumer HDDs; if it is fast enough, the HDDs, CPU, or network become the bottleneck, so there is not much to gain beyond that point.

This is more a warning to check, before buying a large number of SSDs, whether they actually perform well when used by Ceph, as its access and load patterns are quite different from what typical benchmarks measure.
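One quick sanity check, assuming you can run a destructive test on a spare drive (the device path and job name below are only placeholders), is a single-job, queue-depth-1 synchronous 4k write test with fio, which is much closer to what BlueStore's WAL does than the usual vendor benchmarks:

  sudo fio --name=qd1-sync-test --filename=/dev/sdX --direct=1 --sync=1 \
       --rw=write --bs=4k --numjobs=1 --iodepth=1 \
       --runtime=60 --time_based --group_reporting

Consumer SSDs without power-loss protection often collapse to a few hundred IOPS on this kind of test, while enterprise drives with PLP typically stay in the tens of thousands.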


On 8/23/21 1:03 PM, Roland Giesler wrote:
On Mon, 23 Aug 2021 at 00:59, Kai Börnert <kai.boernert@xxxxxxxxx> wrote:
As far as I understand, the more important factors (for the SSDs) are whether they
have power-loss protection (so they can safely use their on-device write cache)
and how many IOPS they deliver for direct writes at queue depth 1.
So what you're saying is that where the WAL is stored is
supercritical, since it could kill performance completely?

I just did a test on an HDD-with-block.db-on-SSD cluster using extra-cheap
consumer SSDs; adding the SSDs reduced(!) the performance by about
1-2 orders of magnitude.
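For context, a hybrid OSD of that kind puts the data on the HDD and block.db on an SSD; a minimal sketch of creating one (device names are placeholders for your own disks) would be:

  sudo ceph-volume lvm create --bluestore --data /dev/sd<hdd> --block.db /dev/sd<ssd>

With that layout the RocksDB metadata and WAL writes land on the SSD, which is why a slow SSD drags the whole OSD down.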

While the benchmark is running, the SSDs are at 100% utilization according to
iostat while the HDDs are below 10%; the performance is an absolute joke.
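(For reference, that per-device utilization comes from iostat's extended stats view, e.g.:

  iostat -x 1

where the %util column shows how busy each device was during each interval.)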

pinksupervisor:~$ sudo rados bench -p scbench 5 write --no-cleanup
hints = 1
Maintaining 16 concurrent writes of 4194304 bytes to objects of size
4194304 for up to 5 seconds or 0 objects
Total time run:         15.5223
Total writes made:      21
Write size:             4194304
Object size:            4194304
Bandwidth (MB/sec):     5.41157
Stddev Bandwidth:       3.19595
Max bandwidth (MB/sec): 12
Min bandwidth (MB/sec): 0
Average IOPS:           1
Stddev IOPS:            0.798809
Max IOPS:               3
Min IOPS:               0
Average Latency(s):     11.1352
Stddev Latency(s):      4.79918
Max latency(s):         15.4896
Min latency(s):         1.13759

tl;dr: the interface is not that important; a good SATA drive can easily
beat a SAS drive.

On 8/21/21 10:34 PM, Teoman Onay wrote:
You seem to focus only on the controller bandwidth, while you should also
consider disk RPM. Most SATA drives run at 7,200 rpm while SAS ones go
from 10k to 15k rpm, which increases the number of IOPS:

SATA 7.2k:  ~80 IOPS
SAS 10k:   ~120 IOPS
SAS 15k:   ~180 IOPS
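Those figures follow roughly from the mechanics, assuming typical average seek times (~8.5 ms for 7.2k drives, ~5 ms for 10k, ~3.5 ms for 15k): IOPS ≈ 1 / (average seek time + average rotational latency), where the rotational latency is half a revolution. At 7,200 rpm that is 60/7200/2 ≈ 4.2 ms, so 1 / (8.5 + 4.2) ms ≈ 80 IOPS; at 10k rpm, 1 / (5 + 3) ms ≈ 125 IOPS; at 15k rpm, 1 / (3.5 + 2) ms ≈ 180 IOPS.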

The MTBF of SAS drives is also higher than that of SATA ones.

What is your use case? RGW? Small or large files? RBD?



On Sat, 21 Aug 2021, 19:47 Roland Giesler, <roland@xxxxxxxxxxxxxx> wrote:

Hi all,

(I asked this on the Proxmox forums, but I think it may be more
appropriate here.)

In your practical experience, when I choose new hardware for a
cluster, is there any noticeable difference between using SATA or SAS
drives? I know SAS drives can have a 12Gb/s interface and I think SATA
can only do 6Gb/s, but in my experience the drives themselves can't
write at 12Gb/s anyway, so it makes little if any difference.
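(Rough numbers back that up: a fast 7,200 rpm HDD sustains on the order of
200-250 MB/s sequentially, i.e. roughly 1.6-2 Gb/s, which is well below even
SATA's 6 Gb/s link, so the extra headroom of a 12 Gb/s SAS link is largely
unused by a single spinning disk.)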

I use a combination of SSDs and SAS drives in my current cluster (in
different Ceph pools), but I suspect that if I choose enterprise-class
SATA drives for this project, I will get the same level of
performance.

I think that with Ceph the hard error rate of drives becomes less
relevant than if I had used some level of RAID.

Also, if I go with SATA, I can use AMD Epyc processors (and I don't
want to use a different supplier), which gives me a lot of extra cores
per unit at a lower price, which of course all adds up to a better
deal in the end.

I'd like to specifically hear from you what your experience is in this
regard.
_______________________________________________
ceph-users mailing list -- ceph-users@xxxxxxx
To unsubscribe send an email to ceph-users-leave@xxxxxxx





