RE: strange discrepancy

> From: fio-owner@xxxxxxxxxxxxxxx [mailto:fio-owner@xxxxxxxxxxxxxxx] On
> Behalf Of Pal, Laszlo

> Finally I've figured out how to use FIO :) Now I can measure several
> things, so I've started to get IOPS data for our hardware. The graphs
> look good, but I can see a very strange discrepancy in the seq_write
> part.

> [global]
> ioengine=libaio
Without iodepth > 1 you'll effectively be doing synchronous I/O, so there's no benefit to using libaio. If you're interested in throughput (not latency), set iodepth to 512, which will likely exceed the queue depth of the drive/controller/OS.
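For example, a global section along these lines (512 is just an illustrative ceiling, not a tuned value; adjust for your stack):

[global]
ioengine=libaio
iodepth=512    ; keep many I/Os in flight so async submission actually pays off
direct=1       ; O_DIRECT, so the page cache doesn't mask device behavior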

> buffered=0
You don't need this if you specify:
> direct=1


> bs=4k
> blocksize_range=1k-64k

AFAIK, blocksize_range supersedes bs. Also, depending on the drive (what type?), a 1k minimum can cause misaligned writes, which massively slow the overall run. I've mostly seen this with 4Kn and 512e SSDs. You can test for that by adding blockalign=4k.
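As a sketch (note that blockalign, per the fio docs, aligns random I/O units, so this hypothetical job uses randwrite):

[randwrite_aligned]
rw=randwrite
bsrange=1k-64k   ; same option as blocksize_range; overrides any bs= line
blockalign=4k    ; align each random I/O to a 4k boundary to rule out misalignment penalties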

> size=2048m
I usually use a runtime spec instead of size, and also set fill_device=true.
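For instance (60 seconds is an arbitrary example duration):

[global]
runtime=60       ; run for a fixed time instead of a fixed size
time_based       ; keep looping over the device/file until runtime expires
fill_device=1    ; for write jobs, also stop at end of device / ENOSPC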

[snip]

> 
> The results are as follows:
(IOPS or MB/s?)
> Random Read -- 86
> Random Write -- 77
> Sequential Read -- 2170
> Sequential Write -- 45

> I have two questions again.
> Are the parameters above a good approach to determine IOPS for this system?
See above :)

> What can be the reason for the huge difference between Seq_read and Seq_write?

Many things. First off, is this a block-level test, or is there a file system involved? What drives? What controller? RAID?
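The distinction matters: a block-level job writes the raw device directly (destructive!), while a file system job goes through a mounted path and picks up allocation, journaling, and cache effects. Hypothetical device/path names below:

; raw block device test - bypasses the file system entirely (destroys data on /dev/sdX!)
[raw]
filename=/dev/sdX

; file system test - goes through the mounted file system
[fs]
directory=/mnt/test
size=2048m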

z!




