I was playing with this feature today. It seems like a nice feature, as it lets you simulate a read or write of a somewhat fragmented file. From the docs, I can do something like --rw=randread:8 and expect fio to do 8 I/Os "before getting a new offset". So when fio does its reads, they should occur at offset1, offset1+block_size, offset1+block_size*2, ..., offset1+block_size*7, and then at a new random offset2, followed by offset2+block_size, offset2+block_size*2, etc.

What I'm seeing with this feature, though, is that the first offset chosen is always 0, and that offset (plus multiples of the block size) is used for the first 7 (nr-1) I/Os. For example, with this command:

    sudo fio --ioengine=libaio --direct=1 --output-format=json+ --name=job \
        --filename=/dev/nvme1n1 --size=128MB --rw=randread:8 --iodepth=1 \
        --bs=4K --write_bw_log=fio_0 --log_offset=1

the bandwidth log entries are (the fifth column is the offset):

    14, 288, 0, 4096, 0, 0
    14, 36141, 0, 4096, 4096, 0
    14, 34715, 0, 4096, 8192, 0
    14, 34406, 0, 4096, 12288, 0
    14, 34700, 0, 4096, 16384, 0
    14, 27766, 0, 4096, 20480, 0
    15, 34633, 0, 4096, 24576, 0
    15, 20233, 0, 4096, 8093696, 0
    15, 39409, 0, 4096, 8097792, 0

This seems odd, and kind of wrong: the first nr-1 ops do not happen from a random offset as the randread:<nr>/randwrite:<nr> value specifies, but instead from offset 0. When one runs a random I/O job without the :<nr> suffix, the first operation is issued from a random offset, not 0.

I'm happy to find and fix the behavior, but wanted to first make sure it isn't intentional, or that I'm somehow misunderstanding it.
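For reference, here is a toy sketch of the offset pattern I'd expect from --rw=randread:<nr> based on my reading of the docs — this is just my mental model, not fio's actual implementation, and the function name is made up:

```python
import random

def expected_offsets(nr, block_size, file_size, count):
    """Toy model: pick a random block-aligned offset, issue nr sequential
    I/Os from it, then pick a new random offset. Not fio's actual code."""
    offsets = []
    while len(offsets) < count:
        # New random, block-aligned starting offset -- never forced to 0.
        base = random.randrange(0, file_size // block_size) * block_size
        for i in range(nr):
            if len(offsets) >= count:
                break
            offsets.append(base + i * block_size)
    return offsets

# E.g. for the job above: 8 I/Os per random offset, 4K blocks, 128MB file.
print(expected_offsets(8, 4096, 128 * 1024 * 1024, 16))
```

Under this model, only the starting offset of each run of 8 is random; the observed behavior instead pins the very first run to offset 0.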