Re: How to do strict synchronous i/o on Windows?

On 15/08/2012, Martin Steigerwald <Martin@xxxxxxxxxxxx> wrote:
> Am Mittwoch, 15. August 2012 schrieb Greg Sullivan:
>> On 15 August 2012 07:24, Martin Steigerwald <Martin@xxxxxxxxxxxx> wrote:
>> > Am Dienstag, 14. August 2012 schrieb Greg Sullivan:
>> > > On 15/08/2012, Martin Steigerwald <Martin@xxxxxxxxxxxx> wrote:
>> > > > Am Dienstag, 14. August 2012 schrieb Greg Sullivan:
>> > > >> On 15/08/2012, Martin Steigerwald <Martin@xxxxxxxxxxxx> wrote:
>> > > >> > Am Dienstag, 14. August 2012 schrieb Greg Sullivan:
>> > > >> >> On 15 August 2012 03:36, Martin Steigerwald
>> > > >> >> <Martin@xxxxxxxxxxxx>
>> > > >> >>
>> > > >> >> wrote:
>> > > >> >> > Hi Greg,
>> > > >> >
>> > > >> > […]
>> > > >> >
>> > > >> >> > Am Dienstag, 14. August 2012 schrieb Greg Sullivan:
>> > > >> >> >> On Aug 14, 2012 11:06 PM, "Jens Axboe" <axboe@xxxxxxxxx>
> wrote:
>> > > >> >> >> > On 08/14/2012 08:24 AM, Greg Sullivan wrote:
>> > […]
>> >
>> > > >> >> Is it possible to read from more than one file in a single job,
>> > > >> >> in a round-robin fashion? I tried putting more than one file
>> > > >> >> in a single job, but it only opened one file. If you mean to
>> > > >> >> just do random reads in a single file - I've tried that, and
>> > > >> >> the throughput is unrealistically low. I suspect it's
>> > > >> >> because the read-ahead buffer cannot be effective for random
>> > > >> >> accesses.  Of course, reading sequentially from a single
>> > > >> >> file will result in a throughput that is far too high to
>> > > >> >> simulate the application.
>> > > >> >
>> > > >> > Have you tried
>> > > >> >
>> > > >> >        nrfiles=int
>> > > >> >
>> > > >> >               Number of files to use for this job.  Default:
>> > > >> >               1.
>> > > >> >
>> > > >> >        openfiles=int
>> > > >> >
>> > > >> >               Number of files to keep open at the same time.
>> > > >> >               Default: nrfiles.
>> > > >> >
>> > > >> >        file_service_type=str
>> > > >
>> > > > […]
>> > > >
>> > > >> > ? (see fio manpage).
>> > > >> >
>> > > >> > It seems to me that all you need is nrfiles. I'd bet that fio
>> > > >> > distributes the given I/O size among those files, but AFAIR
>> > > >> > there is something about that in the fio documentation as
>> > > >> > well.
>> > > >> >
>> > > >> > Use the doc! ;)
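Reading those manpage entries, a single-job file that spreads its I/O
over several files round-robin would, as far as I can tell, look
something like this (my reading of the doc, not verified):

```ini
; sketch based on the manpage excerpts above - these are all
; documented fio options, but I have not verified this exact job
[singlethread]
rw=randread
size=1G
nrfiles=8
openfiles=8
file_service_type=roundrobin
```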
>> > > >
>> > > > […]
>> > > >
>> > > >> Yes, I have tried all that, and it works, except that it causes
>> > > >> disk queuing, as I stated in my first post. I thought you meant
>> > > >> to put all the files into a single [job name] section of the
>> > > >> ini file, to enforce single threaded io.
>> > > >
>> > > > With just one job running at once?
>> > > >
>> > > > Can you post an example job file?
>> > > >
>> > > > Did you try the sync=1 / direct=1 suggestion from Bruce Chan?
>> > > >
>> > > > I only know the behaviour of fio on Linux, where an I/O depth
>> > > > greater than one is only possible with libaio and direct=1. The
>> > > > manpage hints that the I/O depth is one for all synchronous I/O
>> > > > engines, so I'd bet that applies to Windows as well.
>> > > >
>> > > > Other than that I have no idea.
>> >
>> > […]
>> >
>> > > One INI file, but a separate [job name] section for each file, yes.
>> > > According to Jens, because each [job name] is a separate thread,
>> > > and iodepth acts at the thread level, there will still be queuing
>> > > at the device level. If there were a way to do what I want, I think
>> > > Jens would have told me, unfortunately.   ;)
>> > >
>> > > direct I/O does at least allow me to do cache-less reads though -
>> > > thank you.
>> >
>> > My suggestion is to use one job with several files.
>> >
>> > martin@merkaba:/tmp> cat severalfiles.job
>> > [global]
>> > size=1G
>> > nrfiles=100
>> >
>> > [read]
>> > rw=read
>> >
>> > [write]
>> > stonewall
>> > rw=write
>> >
>> > (these are two jobs, but stonewall makes the write job run only
>> > after the read job has finished, with cache invalidation if not
>> > disabled and if supported by the OS)
>> >
>> > martin@merkaba:/tmp> fio severalfiles.job
>> > read: (g=0): rw=read, bs=4K-4K/4K-4K, ioengine=sync, iodepth=1
>> > write: (g=1): rw=write, bs=4K-4K/4K-4K, ioengine=sync, iodepth=1
>> > 2.0.8
>> > Starting 2 processes
>> > read: Laying out IO file(s) (100 file(s) / 1023MB)
>> > write: Laying out IO file(s) (100 file(s) / 1023MB)
>> > Jobs: 1 (f=100)
>> > read: (groupid=0, jobs=1): err= 0: pid=23377
>> > [… lots of fast due to /tmp being a RAM-based filesystem – tmpfs …]
>> >
>> >
>> > martin@merkaba:/tmp> ls -lh read.1.* | head
>> > -rw-r--r-- 1 martin martin 11M Aug 14 23:15 read.1.0
>> > -rw-r--r-- 1 martin martin 11M Aug 14 23:15 read.1.1
> […]
>> > [… only first ten displayed …]
>> >
>> > martin@merkaba:/tmp> find -name "read.1*" 2>/dev/null | wc -l
>> > 100
>> >
>> > 100 files at 11M each - allowing for rounding, that adds up nicely
>> > to the one GiB.
>> >
>> > Raw sizes are:
>> >
>> > martin@merkaba:/tmp> ls -l read.1.* | head
>> > -rw-r--r-- 1 martin martin 10737418 Aug 14 23:20 read.1.0
>> > -rw-r--r-- 1 martin martin 10737418 Aug 14 23:20 read.1.1
> […]
>> > Note: When I used filename, fio just created one file regardless of
>> > the nrfiles setting. I would have expected it to use the filename as
>> > a prefix. There might be some way to have it do that.
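As an aside, the manpage also describes giving filename a
colon-separated list, which might be the way to name each file
explicitly instead of relying on fio's default naming scheme (an
untested guess on my part):

```ini
; sketch - a ':'-separated filename is documented to mean several
; files for one job; I have not tried this myself
[named]
rw=read
size=1G
filename=file.0:file.1:file.2
```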
>> >
>> > Ciao,
>>
>> Thanks - that runs, but it's still queuing. As I said before, I can't
>> use the sync engine - I receive an error. Is there a synchronous
>> engine available for Windows? Perhaps that's the only problem.
>> Can you check to see whether your system is queuing at the file
>> system/device level when you run that test?
>>
>> I had attempted to put the files in a single job earlier - I think it
>> may have been successfully accessing both files, but I didn't notice
>> it in the output. I'm a raw beginner.
>
> Did you try with
>
> ioengine=windowsaio
>
> +
>
> iodepth=1 (which should be the default anyway, I think)
>
>
> Otherwise I have no idea. I have never used fio on Windows myself.
>
> It might help if you explain exactly which problem you want to solve
> with the fio measurements. Multimedia streaming. Is it too slow? Why
> do you want to do these measurements?
>

They are both defaults, and the output shows that both are being used.
If you could tell me whether your system is generating queuing, it
would help: if yours queues even when using the sync I/O engine, it
means I'm wasting my time and fio simply needs to be augmented to
support strictly single-threaded operation over multiple files.
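
For reference, the job file I have been experimenting with looks
roughly like this - the ioengine and iodepth lines just spell out the
defaults the output reports, and the rest follows Martin's example:

```ini
; sketch of my test job - windowsaio/iodepth=1 are the Windows
; defaults mentioned above; direct=1 per Bruce Chan's suggestion
[streams]
ioengine=windowsaio
iodepth=1
direct=1
rw=randread
size=1G
nrfiles=100
```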

I want to determine whether the application in question can extract a
reasonable number of real-time streams from any given storage system.

Thanks,
Greg.
--
To unsubscribe from this list: send the line "unsubscribe fio" in
the body of a message to majordomo@xxxxxxxxxxxxxxx
More majordomo info at  http://vger.kernel.org/majordomo-info.html

