Re: Strange benchmark results of SSD - any ideas

Thanks for the feedback. I owe you all an apology: I made a rather
stupid mistake.
I tested the SSDs on a P420i RAID controller (not in HBA mode), and
that seems to be what caused these strange patterns.

After some feedback, I decided to test the SSDs again on a regular
AHCI SATA 300 controller and the results were as expected.
I've updated my blog post with the new benchmark results. This is an example:

https://raw.githubusercontent.com/louwrentius/fio-plot/master/images/SAMSUNG-PM883-ON-AHCI-SATA-300-FULL-DISK-4K-RANDOM-READ-WRITE-50%25-2020-01-31_144732.png
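
For anyone who wants to check the same thing on their own setup: standard
tools like lsblk and lspci show which transport/controller a disk actually
sits behind (the columns below are real, the device output is just an example):

  # Show each block device with its SCSI address, transport (sata/sas/...) and model
  lsblk -o NAME,HCTL,TRAN,MODEL,SIZE

  # Show which storage controllers are present (plain AHCI vs. a RAID HBA such as the P420i)
  lspci | grep -iE 'ahci|sata|raid'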

As to why I test this way (with qd=1): I like to see how SSDs perform
in worst-case situations.
Although SSDs can reach very high IOPS at high queue depths, I wonder
whether those queue depths are actually encountered in real-life
situations. That probably depends on the workload.
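
For illustration, a QD=1 full-device job of the kind I mean looks roughly
like this; the device path and log file names are placeholders, not my
exact job (and note that running it destroys the data on the device):

  # 4K random read/write (50/50 mix) at queue depth 1 over the whole device,
  # logging IOPS and latency averaged per second. /dev/sdX is a placeholder.
  fio --name=qd1-randrw \
      --filename=/dev/sdX \
      --ioengine=libaio --direct=1 \
      --rw=randrw --rwmixread=50 --bs=4k \
      --iodepth=1 \
      --log_avg_msec=1000 \
      --write_iops_log=qd1-randrw \
      --write_lat_log=qd1-randrw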

Thanks again for the suggestions.

On Fri, 31 Jan 2020 at 08:58, Erwan Velu <e.velu@xxxxxxxxxx> wrote:
>
> A couple of comments on your testing procedure.
>
> - Please state the versions of everything you use: fio, kernel, system,
> BIOS, SSD firmware.
>
> All this can have an impact on your testing.
>
> It's unclear whether they all ran on the same host/setup or not.
>
>
> - Please expose the device size, as the SSD model/brand isn't enough.
>
> The size of the SSD has a huge impact on its performance.
>
> You should also expose the DWPD of these drives to show how endurant
> they are supposed to be.
>
> The public price could also be a hint for the reader.
>
>
> - Please expose the SMART attributes of these drives
>
> If some are older than others, the wear-leveling status could impact
> the performance too.
>
>
> - Why do you force iodepth to 1? SSD devices can easily handle much
> more than that, and that is what will happen when they are used by the OS.
>
> I'd suggest running with at least iodepth=32.
>
>
> - Please expose the device configuration with sdparm -a (to know what
> the read & write cache settings are on these devices).
>
>
> - Starting from a freshly erased disk would be better, so you start
> from a clean device (same TRIM state on all devices).
>
>
> On 29/01/2020 23:43, Louwrentius wrote:
> > Hello,
> >
> > I've done some benchmarks with fio across entire SSD devices, so the
> > benchmark stops when the whole device has been read/written. I've
> > logged latency and IOPS for the entire run.
> >
> > Those logs are then translated to graphs. The Intel SSD shows the kind
> > of graph I would expect. The Samsung and Kingston SSDs show 'strange'
> > results.
> >
> > I've written a brief blog article about this which includes links to
> > the raw data and the images.
> > https://louwrentius.com/difference-of-behavior-in-sata-solid-state-drives.html
> >
> > Does anybody have an idea what could be going on? Why do we see these
> > 'golden gate bridge' patterns? Maybe I did something wrong?
> >
> > With regards,
> >
> > Louwrentius



