On Wed, 29 Jan 2020 at 22:44, Louwrentius <louwrentius@xxxxxxxxx> wrote:
>
> Hello,
>
> I've done some benchmarks with FIO of entire SSD devices. So the FIO
> benchmark stops when the whole device has been read/written. I've
> logged latency and iops for the entire run.
>
> Those logs are then translated to graphs. The Intel SSD shows the kind
> of graph I would expect. The Samsung and Kingston SSDs show 'strange'
> results.
>
> I've written a brief blog article about this which includes links to
> the raw data and the images.
> https://louwrentius.com/difference-of-behavior-in-sata-solid-state-drives.html
>
> Does anybody have an idea what could be going on? Why do we see these
> 'golden gate bridge' patterns? Maybe I did something wrong?

Your job seems to be doing small (4K) randrw with norandommap... Isn't
this the sort of effect seen when garbage collection
(http://codecapsule.com/2014/02/12/coding-for-ssds-part-3-pages-blocks-and-the-flash-translation-layer/ )
kicks in? Have you been doing a secure erase (or, failing that, a trim -
not quite as good because the result is less predictable, but still
better than nothing) before starting your benchmarking?

See https://www.snia.org/sites/default/education/tutorials/2011/fall/SolidState/EstherSpanjer_The_Why_How_SSD_Performance_Benchmarking.pdf
for an overview and
https://www.snia.org/tech_activities/standards/curr_standards/pts for a
list of specs describing the lengths that can be gone to in trying to
make comparisons fair...

--
Sitsofe | http://sucs.org/~sits/
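
P.S. Purely as an illustration of that preconditioning suggestion (the
device path, ioengine, queue depth and logging options below are my
assumptions, not taken from your actual job), trimming the whole device
first and then re-running a 4K randrw job against it might look
something like:

  # Trim (discard) the entire device first - this destroys all data on it
  blkdiscard /dev/sdX

  # Then run the small random read/write job against the raw device,
  # logging per-second latency and IOPS for graphing afterwards
  fio --name=ssd-randrw --filename=/dev/sdX --direct=1 \
      --ioengine=libaio --iodepth=32 \
      --rw=randrw --bs=4k --norandommap \
      --log_avg_msec=1000 --write_lat_log=ssd --write_iops_log=ssd

A secure erase (e.g. via hdparm) gives a cleaner starting state than a
trim, but the above is the cheaper option if you just want to rule out
stale data/garbage collection effects between runs.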