RE: Curious randwrite results on raid10 md device

Jerome, 

The performance variation you mention below doesn't sound unusual for a non-preconditioned SSD. You didn't mention what sort of preconditioning, if any, you did to the SSDs prior to your measurements. SSDs typically require fairly extensive preconditioning before they produce stable results. Take a look at the SNIA test methodology for a good explanation (http://www.snia.org/sites/default/files/SSS_PTS_Enterprise_v1.1.pdf), I think starting around page 18.
Also, I think SSDs generally run with their write cache enabled regardless; the write cache bit is simply ignored.
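[Editor's note: the SNIA-style preconditioning described above can be sketched as a two-stage fio job. This is an illustrative sketch only, not part of the original thread; the device path /dev/sdX is a placeholder, and the block sizes, queue depth, and runtime are assumptions that would need tuning to the drive under test. Running it destroys all data on the device.]

```
; Sketch of SNIA-style SSD preconditioning with fio.
; Stage 1: sequentially write the whole device twice so every LBA is touched.
; Stage 2: sustained 4k random writes to drive the SSD toward steady state.
[global]
filename=/dev/sdX   ; placeholder - DESTROYS data on the device
direct=1
ioengine=libaio

[seq-fill]
rw=write
bs=128k
iodepth=32
loops=2

[rand-steady-state]
stonewall           ; wait for seq-fill to finish before starting
rw=randwrite
bs=4k
iodepth=32
time_based
runtime=1800        ; assumption: 30 min; steady state may need longer
```

Only after a workload like the second stage has flattened out should the "real" measurement runs be taken.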

Kris Davis

-----Original Message-----
From: fio-owner@xxxxxxxxxxxxxxx [mailto:fio-owner@xxxxxxxxxxxxxxx] On Behalf Of Sitsofe Wheeler
Sent: Tuesday, February 6, 2018 8:17 AM
To: Jérôme Charaoui <jcharaoui@xxxxxxxxxxxxxxxxxx>
Cc: fio <fio@xxxxxxxxxxxxxxx>
Subject: Re: Curious randwrite results on raid10 md device

On 5 February 2018 at 04:33, Jérôme Charaoui <jcharaoui@xxxxxxxxxxxxxxxxxx> wrote:
>
> I've been running some tests on a bunch of disks lately and came 
> across a strange result that I'm struggling to explain.
>
> I built a 20-device raid10 md array with SSDs and ran several fio 
> tests on it. The randwrite test really stands out because the IOPS 
> starts out fairly low, around 800 IOPS, but within a few seconds 
> climbs up to about 70K IOPS, which is nearly three times higher than 
> the IOPS I'm getting with the randread test (25K).
>
> I tried disabling the drive caches using hdparm -W 0, and also 
> disabling the internal write-intent bitmap on the md array, but the 
> results are the same.

You didn't include the fio job you're using (see https://github.com/axboe/fio/blob/fio-3.3/REPORTING-BUGS ) so it's impossible to say anything too useful. I'd check whether a solo SSD by itself exhibits similar variation. It's also worth noting that the order you read data back, compared to how it was written, can impact an SSD (see http://codecapsule.com/2014/02/12/coding-for-ssds-part-5-access-patterns-and-system-optimizations/ ) but I wouldn't have thought it would be to such a degree. Perhaps http://vger.kernel.org/vger-lists.html#linux-raid or http://vger.kernel.org/vger-lists.html#linux-block might give better replies...
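[Editor's note: a minimal solo-SSD randwrite job of the kind suggested above might look like the sketch below. This is not Jérôme's actual job file (which was never posted); the device path, queue depth, and runtime are placeholder assumptions for illustration. It writes directly to the raw device and destroys its contents.]

```
; Hypothetical minimal randwrite test against one member SSD.
[global]
filename=/dev/sdX   ; placeholder - pick one SSD, data will be destroyed
direct=1            ; bypass the page cache
ioengine=libaio

[randwrite-test]
rw=randwrite
bs=4k
iodepth=32
time_based
runtime=300
```

Comparing the IOPS curve of a single drive against the 20-device array would show whether the ramp from ~800 to ~70K IOPS comes from the SSDs themselves or from the md layer.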

--
Sitsofe | http://sucs.org/~sits/
--
To unsubscribe from this list: send the line "unsubscribe fio" in the body of a message to majordomo@xxxxxxxxxxxxxxx
More majordomo info at http://vger.kernel.org/majordomo-info.html
