Re: Curious randwrite results on raid10 md device

On 5 February 2018 at 04:33, Jérôme Charaoui
<jcharaoui@xxxxxxxxxxxxxxxxxx> wrote:
>
> I've been running some tests on a bunch of disks lately and came across
> a strange result that I'm struggling to explain.
>
> I built a 20-device raid10 md array with SSDs and ran several fio tests
> on it. The randwrite test really stands out: the IOPS start out fairly
> low, around 800, but within a few seconds climb to about 70K, which is
> nearly three times the IOPS I'm getting with the randread test (25K).
>
> I tried disabling the drive caches using hdparm -W 0, and also disabling
> the internal write-intent bitmap on the md array, but the results are
> the same.

You didn't include the fio job you're using (see
https://github.com/axboe/fio/blob/fio-3.3/REPORTING-BUGS ) so it's hard
to say anything too useful. I'd check whether a single SSD on its own
exhibits similar variation. It's also worth noting that the order in
which you read data back, compared to how it was written, can affect an
SSD's performance (see
http://codecapsule.com/2014/02/12/coding-for-ssds-part-5-access-patterns-and-system-optimizations/
) but I wouldn't have expected it to make a difference of this
magnitude. Perhaps http://vger.kernel.org/vger-lists.html#linux-raid or
http://vger.kernel.org/vger-lists.html#linux-block might give better
replies...
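
For what it's worth, a minimal job along the lines of the sketch below,
run against a single member SSD, would make a useful comparison point.
The device path, block size and queue depth are placeholders (I'm only
guessing at what you ran), not a reproduction of your actual job:

    ; hypothetical 4k random write job against one SSD -- adjust to match
    ; what you actually ran; note that writing to /dev/sdX is destructive
    [global]
    ioengine=libaio
    direct=1
    bs=4k
    iodepth=32
    runtime=60
    time_based

    [single-ssd-randwrite]
    filename=/dev/sdX
    rw=randwrite

If a lone device shows the same slow start followed by a jump in IOPS,
the md layer is probably not the culprit.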

-- 
Sitsofe | http://sucs.org/~sits/



