On 15 August 2012 03:36, Martin Steigerwald <Martin@xxxxxxxxxxxx> wrote:
> Hi Greg,
>
> Please quote (http://learn.to/quote click "English")
>
> On Tuesday, 14 August 2012, Greg Sullivan wrote:
>
>> On Aug 14, 2012 11:06 PM, "Jens Axboe" <axboe@xxxxxxxxx> wrote:
>> >
>> > On 08/14/2012 08:24 AM, Greg Sullivan wrote:
>> > > I need to simulate strict, synchronous, round-robin I/O to a group
>> > > of files. I am on Windows 7 32-bit.
>> > > fio is very nearly working, except that even with a queue depth of
>> > > 1, it still results in a disk queue that is > 1, because the
>> > > "iodepth" parameter is not global - it is per thread. (Correct?)
>> > >
>> > > I've tried using the "sync" engine, but that doesn't work at all -
>> > > it just spews out errors.
>> >
>> > That will be the case for ANY platform and I/O engine. If you have
>> > more than one thread or process going, you can have a depth > 1 at
>> > the device side. The definition of a sync I/O call is that the call
>> > doesn't return until the I/O is done. If you have overlapped calls
>> > due to more than one thread, then that is no longer true.
>> >
>> > What you are looking for is outside the scope of an application. You
>> > would have to limit the queue depth on the operating-system side to
>> > achieve that. Or artificially limit fio in some way, which would not
>> > make a lot of sense, imho.
>
>> Thanks, Jens. I do in fact have an application that reads in exactly
>> the manner I described. I have monitored the queue depth - it does not
>> rise above 1. It is a real-time musical sample streamer.
>>
>> Please consider this a new feature request for fio - thank you.
>
> Is this application multithreaded? If so, are multiple threads doing
> I/O at the same time? If not, I'd suggest just testing with one job.

I don't know whether it is multithreaded or not. All I know is that it
reads many files sequentially and in a round-robin fashion, without
causing any disk queuing.
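For reference, a depth-1, one-job setup like the one being discussed might look like the following fio job file. This is a sketch, not a tested configuration: the file name and sizes are hypothetical, and on Windows the `sync` engine that Greg saw fail would be swapped for `windowsaio` (a real fio engine), which with `iodepth=1` likewise keeps at most one I/O in flight per job.

```ini
; Hypothetical job file: one job, one file, strictly synchronous reads.
; With a single job and iodepth=1 there is never more than one I/O
; outstanding from fio, so the device queue depth should stay at 1.
[global]
rw=read              ; sequential reads
bs=64k               ; block size - adjust to match the streamer
iodepth=1            ; at most one I/O in flight
numjobs=1            ; a second job would break the depth-1 guarantee
thread               ; use threads rather than forked processes
ioengine=windowsaio  ; on Windows; use ioengine=sync on POSIX systems

[stream]
filename=sample.dat  ; hypothetical test file
size=256m
```

As Jens notes above, the guarantee only holds because there is exactly one job: any second job or thread submitting I/O concurrently can push the device-side queue depth above 1 regardless of the engine.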
Is it possible to read from more than one file in a single job, in a
round-robin fashion? I tried putting more than one file in a single
job, but it only opened one file.

If you mean just doing random reads within a single file - I've tried
that, and the throughput is unrealistically low. I suspect that is
because the read-ahead buffer cannot be effective for random accesses.
Of course, reading sequentially from a single file results in a
throughput that is far too high to simulate the application.

Reading a single file using the "skip" option would be a reasonable
compromise if there were a way to get it to loop back to the beginning
of the file and start at the next block into the file, such that the
entire file is eventually read. This would simulate reading
sequentially from many files. (If it can be configured to do this,
please advise.)

Greg.
--
To unsubscribe from this list: send the line "unsubscribe fio" in
the body of a message to majordomo@xxxxxxxxxxxxxxx
More majordomo info at http://vger.kernel.org/majordomo-info.html
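On the multi-file question above: fio does have real options for spreading one job's I/O across several files and rotating between them, namely `nrfiles`, `openfiles`, and `file_service_type=roundrobin`. A sketch of how that might be combined with the depth-1 requirement, assuming these options behave as documented (directory, counts, and sizes are hypothetical; check the fio HOWTO for the exact semantics in your version):

```ini
; Hypothetical job file: one job round-robining over many files with a
; queue depth of 1, approximating a multi-file sample streamer.
[global]
ioengine=sync        ; plain synchronous I/O; try windowsaio on Windows
iodepth=1            ; one outstanding I/O at a time
rw=read              ; each file is read sequentially
bs=64k

[streamer]
directory=samples            ; hypothetical directory of sample files
nrfiles=16                   ; spread the job's I/O across 16 files
file_service_type=roundrobin ; move to the next file after each I/O
openfiles=16                 ; keep all 16 files open simultaneously
size=1g                      ; total I/O, divided across the files
```

Because it is still a single job with `iodepth=1`, this rotates through the files without overlapping I/Os, which matches the access pattern Greg describes more closely than random reads within one file.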