Re: Measuring IOPS

On Wednesday, 3 August 2011, Jeff Moyer wrote:
> Martin Steigerwald <Martin@xxxxxxxxxxxx> writes:
> > - ioengine=libaio
> > - direct=1
> > - and then due to direct I/O alignment requirement: bsrange=2k-16k
> > 
> > So I now also fully understand that ioengine=sync just refers to the
> > synchronous nature of the system calls used, not to whether the I/Os
> > are issued synchronously via sync=1 or bypass the page cache via
> > direct=1.
> > 
> > Attached are results that show drastically lower read IOPS! I first
> > let sequentiell.job write out the complete 2 GB with random data and
> > then ran the iops.job.
> 
> If you want to measure the maximum iops, then you should consider
> driving iodepths > 1.  Assuming you are testing a sata ssd, try using a
> depth of 64 (twice the NCQ depth).

Yes, I thought about that too, but then I also read the "recommendation" 
to use an iodepth of one in a post here:

http://www.spinics.net/lists/fio/msg00502.html

What iodepth do regular workloads use - say, a Linux desktop on an SSD? 
I would bet that Linux uses whatever queue depth it can get. And what 
about server workloads, such as mail processing on SAS disks or a 
fileserver on SATA disks?
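
To make your suggestion concrete, I sketched a job file that combines the 
settings above with a deeper queue. This is just a sketch - the filename, 
size and job name are made up for illustration and would need adjusting to 
the device under test:

```ini
; iops-deep.job - hypothetical variant of iops.job with a deeper queue
[global]
ioengine=libaio      ; async engine, needed for iodepth > 1 to take effect
direct=1             ; bypass the page cache
bsrange=2k-16k       ; block sizes meeting the direct I/O alignment requirement
size=2g
filename=/tmp/fio.testfile   ; assumed path, point it at the real test device

[randread-deep]
rw=randread
iodepth=64           ; twice the NCQ depth of 32, as suggested
```

With ioengine=sync, iodepth would effectively stay at 1 regardless of this 
setting, since each synchronous syscall completes before the next is issued.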


Twice the queue depth reported here:

merkaba:~> hdparm -I /dev/sda | grep -i queue
        Queue depth: 32
           *    Native Command Queueing (NCQ)

Why twice?

Thanks,
-- 
Martin 'Helios' Steigerwald - http://www.Lichtvoll.de
GPG: 03B0 0D6C 0040 0710 4AFA  B82F 991B EAAC A599 84C7
--
To unsubscribe from this list: send the line "unsubscribe fio" in
the body of a message to majordomo@xxxxxxxxxxxxxxx
More majordomo info at  http://vger.kernel.org/majordomo-info.html

