Re: HDD vs SSD without explanation

On 16/01/18 23:14, Neto pr wrote:
2018-01-15 20:04 GMT-08:00 Mark Kirkwood <mark.kirkwood@xxxxxxxxxxxxxxx>:
On 16/01/18 13:18, Fernando Hevia wrote:



The 6 Gb/s interface is capable of a maximum throughput of around 600
MB/s. None of your drives can achieve that, so I don't think you are limited
by the interface speed. The 12 Gb/s interface's speed advantage kicks in when
several drives are installed; it won't make a difference in a single-drive
or even a two-drive system.
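(As a rough sanity check on that figure: the 6 Gb/s link uses 8b/10b encoding, so only 8 of every 10 bits on the wire carry data, which works out to about 600 MB/s:

     # back-of-the-envelope arithmetic, assuming 8b/10b line encoding on the 6 Gb/s link
     echo $(( 6 * 1000 * 1000 * 1000 * 8 / 10 / 8 ))   # 600000000 bytes/s, i.e. ~600 MB/s
)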

But don't take my word for it. Test your drives' throughput with the
command Justin suggested so you know exactly what each drive is capable of:

     Can you reproduce the speed difference using dd ?
     time sudo dd if=/dev/sdX of=/dev/null bs=1M count=32K \
         skip=$((128*$RANDOM/32))   # set bs to optimal_io_size


While common sense says the SSD should outperform the mechanical drive,
your test scenario (large sequential reads) evens out the field a
lot. Still, I would have expected roughly similar results, so
yes, it is weird that the SAS drive doubles the SSD's performance. That is why
I think there must be something else going on during your tests on the SSD
server. It could also be that the SSD isn't working properly, or that you are
running a suboptimal OS+server+controller configuration for the drive.
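A random-read test is usually where an SSD pulls well ahead of a SAS drive, so it is a useful complement to the sequential dd run. A minimal fio sketch (assuming fio is installed and /dev/sdX stands in for the drive under test):

     # read-only 8 kB random reads, bypassing the page cache
     fio --name=randread --filename=/dev/sdX --readonly --direct=1 \
         --rw=randread --bs=8k --iodepth=32 --ioengine=libaio \
         --runtime=60 --time_based --group_reporting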

I would second the analysis above - unless you see your read MB/s slammed up
against 580-600 MB/s continuously, the interface speed is not the issue.
We have some similar servers in which we replaced 12x SAS with 1x SATA 6 Gbit/s
(Intel DC S3710) SSD...and the latter way outperforms the original 12 SAS
drives.
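One way to watch for that while the workload runs (a sketch, assuming the sysstat and smartmontools packages are installed and /dev/sdX is a placeholder for the drive):

     iostat -xm 1 /dev/sdX                            # watch the rMB/s column during the workload
     smartctl -a /dev/sdX | grep -i 'sata version'    # shows the negotiated link speed on SATA drives

Only if rMB/s sits pinned near 580-600 is the 6 Gb/s link the limiting factor.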

I suspect the problem is the particular SSD you have - I have benchmarked
the 256GB EVO variant and was underwhelmed by the performance. These
(budget) triple-level cell NAND SSDs seem to have highly variable read and write
performance (the write side is largely about when the SLC NAND cache gets
full)...the read side I'm not so sure about - but it could be a crappy
chipset/firmware combination. In short, I'd recommend *not* using that particular
SSD for a database workload. I'd recommend one of the Intel Datacenter DC range
(FWIW I'm not affiliated with Intel in any way...but their DC stuff works well).

regards

Mark
Hi Mark
In other forums someone told me that on the Samsung EVO the partition
should be aligned to 3072 rather than the default 2048, so that it starts
on an erase-block boundary, and that the filesystem block size should be
8 kB. I am studying this too. Some DBAs have reported in other situations
that SSDs become very slow when they are full. Mine is 85% full, so maybe
that is also a factor. I'm disappointed with this SSD from Samsung, because
in theory the read speed of an SSD should be more than 300 times faster
than an HDD's, and that is not happening.
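For what it's worth, a quick way to see what the current alignment actually is (a sketch, with /dev/sdX and partition 1 as placeholders for the EVO and its data partition):

     cat /sys/block/sdX/sdX1/start             # start sector: 2048 = 1 MiB boundary, 3072 = 1.5 MiB
     parted /dev/sdX align-check optimal 1     # checks partition 1 against the kernel's reported optimal I/O size

Note that parted only knows what the drive reports, so it can say "aligned" even if the start doesn't match the (unreported) erase-block size.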



Interesting - I didn't try changing the alignment. I could get the rated read and write performance in simple benchmarks (provided it was in a PCIe V3 slot)...so I figured it was ok with the default alignment. However, once more complex workloads were attempted (databases and a distributed object store) the performance was disappointing.

If the SSD is 85% full that will not help either (also look at the expected lifetime of these EVOs - not that great for a server)!
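If you can free up some space, it is also worth making sure the freed blocks are actually handed back to the drive (a sketch, assuming the EVO is mounted at /mnt/pgdata - a placeholder - and the filesystem supports discard):

     df -h /mnt/pgdata              # how full the filesystem really is
     sudo fstrim -v /mnt/pgdata     # tells the SSD which blocks are free, so its garbage collection has room to work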

One thing worth trying is messing about with the IO scheduler: if you are using noop, then try deadline (like I said, possibly crappy firmware)...
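Checking and switching the scheduler is cheap to try (again with /dev/sdX as a placeholder; on kernels of that era the single-queue schedulers were noop/deadline/cfq):

     cat /sys/block/sdX/queue/scheduler                         # the active scheduler is shown in [brackets]
     echo deadline | sudo tee /sys/block/sdX/queue/scheduler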

Realistically, I'd recommend getting an enterprise/DC SSD (put the EVO in your workstation, it will be quite nice there)!

Cheers
Mark



