Bad performance when reads and writes on same LUN

Hi fio users and developers,

I am working on a setup for a customer with a very specific fio
configuration: we have to use 8 LUNs (from a very fast storage system,
by the way), with three readers and one writer per LUN.

So we use 8 fio config files, each of which looks the same except for
the directory= entry:

[global]
directory=/n1-dm-0
rw=randread
ioengine=libaio
iodepth=4
size=1024m
invalidate=1
direct=0
runtime=60
time_based

[Reader-1]

[Reader-2]

[Reader-3]

[Writer]
rw=randwrite
ioengine=libaio
iodepth=32
direct=1

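For completeness, this is roughly how we start the eight instances in
parallel (the config file naming scheme here is made up; our real files
differ only in the directory= line):

```shell
#!/bin/sh
# Launch one fio instance per LUN config file, all in parallel,
# writing each run's summary to a matching .out file.
for cfg in n1-dm-*.fio; do              # hypothetical naming scheme
    fio "$cfg" --output="${cfg%.fio}.out" &
done
wait                                    # block until all 8 runs finish
```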

When running these configurations (8 fio instances in parallel) I get
the following results (grep'ed IOPS lines from the 8 output files):
  read : io=128520KB, bw=2141.1KB/s, iops=535 , runt= 60001msec
  read : io=127048KB, bw=2117.5KB/s, iops=529 , runt= 60001msec
  read : io=126816KB, bw=2113.6KB/s, iops=528 , runt= 60001msec
  write: io=3811.8MB, bw=65052KB/s, iops=16263 , runt= 60001msec
  read : io=123704KB, bw=2061.7KB/s, iops=515 , runt= 60004msec
  read : io=123656KB, bw=2060.9KB/s, iops=515 , runt= 60002msec
  read : io=122924KB, bw=2048.7KB/s, iops=512 , runt= 60004msec
  write: io=3761.8MB, bw=64187KB/s, iops=16046 , runt= 60002msec
  read : io=127636KB, bw=2127.3KB/s, iops=531 , runt= 60001msec
  read : io=126440KB, bw=2107.4KB/s, iops=526 , runt= 60001msec
  read : io=125612KB, bw=2093.6KB/s, iops=523 , runt= 60001msec
  write: io=3832.4MB, bw=65403KB/s, iops=16350 , runt= 60002msec
  read : io=125344KB, bw=2089.4KB/s, iops=522 , runt= 60001msec
  read : io=125284KB, bw=2088.4KB/s, iops=522 , runt= 60001msec
  read : io=125080KB, bw=2084.7KB/s, iops=521 , runt= 60001msec
  write: io=3784.8MB, bw=64592KB/s, iops=16147 , runt= 60001msec
  read : io=127656KB, bw=2127.6KB/s, iops=531 , runt= 60001msec
  read : io=127144KB, bw=2119.4KB/s, iops=529 , runt= 60001msec
  read : io=126248KB, bw=2104.1KB/s, iops=526 , runt= 60001msec
  write: io=3828.9MB, bw=65343KB/s, iops=16335 , runt= 60002msec
  read : io=124236KB, bw=2070.6KB/s, iops=517 , runt= 60001msec
  read : io=123908KB, bw=2065.2KB/s, iops=516 , runt= 60001msec
  read : io=123352KB, bw=2055.9KB/s, iops=513 , runt= 60001msec
  write: io=3764.1MB, bw=64253KB/s, iops=16063 , runt= 60001msec
  read : io=127784KB, bw=2129.8KB/s, iops=532 , runt= 60001msec
  read : io=127276KB, bw=2121.3KB/s, iops=530 , runt= 60001msec
  read : io=127240KB, bw=2120.7KB/s, iops=530 , runt= 60001msec
  write: io=3839.5MB, bw=65524KB/s, iops=16380 , runt= 60002msec
  read : io=124864KB, bw=2081.4KB/s, iops=520 , runt= 60001msec
  read : io=124008KB, bw=2066.8KB/s, iops=516 , runt= 60002msec
  read : io=124068KB, bw=2067.8KB/s, iops=516 , runt= 60001msec
  write: io=3748.9MB, bw=63979KB/s, iops=15994 , runt= 60001msec
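
(The summary lines above were pulled out of the output files with a
simple grep along these lines; the file names are hypothetical:)

```shell
# Extract the per-job read/write summary lines from the fio output files.
# -h suppresses the file name prefix, -E enables extended regexes.
grep -hE '^ *(read|write) *:' n1-dm-*.out
```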


As you can see, read performance is very bad! Writes are fine, but the
reads are not acceptable.
Note that the readers run with direct=0 (buffered), and I can see that
the reads are not even hitting the storage system!

I then tried removing the writer from the fio config files, and the
read results are as we would expect:

  read : io=1522.1MB, bw=25977KB/s, iops=6494 , runt= 60001msec
  read : io=1526.3MB, bw=26047KB/s, iops=6511 , runt= 60000msec
  read : io=1526.9MB, bw=26058KB/s, iops=6514 , runt= 60001msec
  read : io=990.98MB, bw=16912KB/s, iops=4227 , runt= 60001msec
  read : io=990.85MB, bw=16910KB/s, iops=4227 , runt= 60001msec
  read : io=1107.5MB, bw=18900KB/s, iops=4724 , runt= 60001msec
  read : io=1006.8MB, bw=17181KB/s, iops=4295 , runt= 60001msec
  read : io=1011.2MB, bw=17256KB/s, iops=4314 , runt= 60001msec
  read : io=1046.2MB, bw=17854KB/s, iops=4463 , runt= 60001msec
  read : io=987.33MB, bw=16850KB/s, iops=4212 , runt= 60001msec
  read : io=991.72MB, bw=16925KB/s, iops=4231 , runt= 60000msec
  read : io=1102.4MB, bw=18813KB/s, iops=4703 , runt= 60001msec
  read : io=1014.8MB, bw=17318KB/s, iops=4329 , runt= 60001msec
  read : io=1012.1MB, bw=17287KB/s, iops=4321 , runt= 60001msec
  read : io=1128.7MB, bw=19252KB/s, iops=4813 , runt= 60001msec
  read : io=1186.1MB, bw=20257KB/s, iops=5064 , runt= 60001msec
  read : io=996.34MB, bw=17004KB/s, iops=4250 , runt= 60001msec
  read : io=1051.6MB, bw=17945KB/s, iops=4486 , runt= 60001msec
  read : io=1012.9MB, bw=17286KB/s, iops=4321 , runt= 60001msec
  read : io=1014.8MB, bw=17318KB/s, iops=4329 , runt= 60001msec
  read : io=1125.5MB, bw=19208KB/s, iops=4801 , runt= 60001msec
  read : io=1125.3MB, bw=19204KB/s, iops=4800 , runt= 60001msec
  read : io=1005.2MB, bw=17155KB/s, iops=4288 , runt= 60001msec
  read : io=1016.7MB, bw=17351KB/s, iops=4337 , runt= 60001msec

We are on a RHEL 5.5 box with 8 Gb FC. With another tool I see about
180k 4k direct (!) random-read IOPS (with 64 threads against 4 LUNs).

We have also tried using separate read and write config files, but as
soon as we run the tests against the same LUN, read performance drops!
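
(To be explicit, the split variant looks roughly like the following two
job files, both pointed at the same LUN; the file layout is illustrative,
the option values match the combined config above:)

```ini
; reader.fio -- buffered random reads, three jobs
[global]
directory=/n1-dm-0
rw=randread
ioengine=libaio
iodepth=4
size=1024m
invalidate=1
direct=0
runtime=60
time_based

[Reader-1]
[Reader-2]
[Reader-3]

; writer.fio -- direct random writes, one job
; [global]
; directory=/n1-dm-0
; rw=randwrite
; ioengine=libaio
; iodepth=32
; size=1024m
; direct=1
; runtime=60
; time_based
;
; [Writer]
```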

Any feedback is highly appreciated!

Cheers,
Matt



