Curious randwrite results on raid10 md device

Hello,

I've been running some tests on a bunch of disks lately and came across
a strange result that I'm struggling to explain.

I built a 20-device raid10 md array out of SSDs and ran several fio tests
on it. The randwrite test stands out: IOPS starts fairly low, around 800,
but within a few seconds climbs to about 70K, which is nearly three times
what I'm getting with the randread test (25K).
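
In case it is relevant, the array was built along these lines; the device
names, chunk size and layout below are illustrative placeholders rather
than my exact invocation:

# 20 SSDs in a near-2 raid10 (placeholder device names)
mdadm --create /dev/md1 --level=10 --raid-devices=20 \
      --layout=n2 --chunk=512 /dev/sd[b-u]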

I tried disabling the drive caches using hdparm -W 0, and also disabling
the internal write-intent bitmap on the md array, but the results are
the same.
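
Concretely, that amounts to something like the following (again with
placeholder device names):

# turn off the volatile write cache on every member disk
for d in /dev/sd[b-u]; do hdparm -W 0 "$d"; done

# remove the internal write-intent bitmap from the array
mdadm --grow --bitmap=none /dev/md1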

Here are the test parameters:

ioengine=libaio
invalidate=1
ramp_time=10
iodepth=4
runtime=60
time_based
direct=1
randrepeat=0
filename=/dev/md1
bs=4k
rw=randwrite
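
For anyone wanting to reproduce this, the same job can be run straight
from the command line; the write_iops_log/log_avg_msec options at the
end are not part of the job parameters above, they are just one way to
emit the per-interval IOPS logs of the kind plotted below:

fio --name=randwrite --ioengine=libaio --invalidate=1 --ramp_time=10 \
    --iodepth=4 --runtime=60 --time_based=1 --direct=1 --randrepeat=0 \
    --filename=/dev/md1 --bs=4k --rw=randwrite \
    --write_iops_log=randwrite --log_avg_msec=500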

I generated some gnuplot graphs from the logs:

randread test: https://imgur.com/dj25FFH
randwrite test: https://imgur.com/BuZVNmh
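
The gnuplot side is nothing fancy. With an IOPS log produced as above
(the file name depends on the log prefix, fio version and job index,
e.g. randwrite_iops.log or randwrite_iops.1.log), something like this
redraws the IOPS-over-time curve; the fio_generate_plots script shipped
with fio should give a similar result:

gnuplot <<'EOF'
# fio IOPS log columns: time (msec), IOPS, data direction, block size
set terminal png size 800,480
set output 'randwrite_iops.png'
set datafile separator ','
set xlabel 'time (s)'
set ylabel 'IOPS'
plot 'randwrite_iops.1.log' using ($1/1000.0):2 with lines title 'randwrite'
EOF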


Thanks,

-- Jerome


