On 2011-02-07 15:35, Jeff Moyer wrote:
> Jens Axboe <jaxboe@xxxxxxxxxxxx> writes:
>
>> On 2011-02-04 20:21, Steven Pratt wrote:
>>> I am trying to create a job file that randomly selects a file from an
>>> imported list and reads the entire file sequentially, then moves to
>>> the next file. I also want multiple jobs (processes) running the same
>>> workload. I have this:
>>>
>>> [global]
>>> bs=4k
>>> time_based=1
>>> runtime=15m
>>> iodepth=4
>>> rw=read
>>> ioengine=libaio
>>> time_based=1
>>> ramp_time=600s
>>> norandommap
>>>
>>> [job1]
>>> opendir=/${FIO_MOUNT}/session1/small_file1
>>> file_service_type=sequential
>>> numjobs=8
>>>
>>> I used file_service_type=sequential because I thought that without it,
>>> fio would only do a single read (block) from a file before switching
>>> to a different file, which is not what I want. The issue with this
>>> test as written is that all the fio processes seem to choose files in
>>> the same order, so I get far more cache hits than I want. I want this
>>> to be more of a random file selection, but still reading each whole
>>> file. Any advice?
>>
>> file_service_type=random:<largenum>
>>
>> should do what you need, I think. If you ensure that <largenum> is
>> sufficiently large that the file will always be finished before you run
>> out, then that should work.
>
> Also note that with ioengine=libaio, you'll also want to specify direct
> I/O (otherwise io_submit will block until the I/O is complete). If you
> really want buffered I/O, then you need to choose a different io engine.

Good point. I used to have a warning for that, but dropped it since it
got annoying when I was testing the buffered aio patches. Perhaps I
should reinstate it. In any case, if the result is inspected, it'll be
apparent that the queue depth was 1 throughout the run.

-- 
Jens Axboe
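
For illustration, a job file folding in both suggestions from the thread
might look roughly like the sketch below. The switch count of 50000000
and the direct=1 line are assumptions for the example, not values given
in the thread; the count just needs to be larger than the biggest file
size divided by the block size so a file is always finished before fio
switches away from it.

[global]
bs=4k
rw=read
ioengine=libaio
; direct I/O so io_submit does not block and the iodepth is honoured
direct=1
iodepth=4
time_based=1
runtime=15m
ramp_time=600s
norandommap

[job1]
opendir=/${FIO_MOUNT}/session1/small_file1
; pick the next file at random, but keep issuing I/O to the chosen file
; for up to 50000000 I/Os, so each file is read to completion before a
; new one is selected (50000000 is an illustrative value)
file_service_type=random:50000000
numjobs=8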