On Mon, May 23, 2016 at 9:40 AM, Alireza Haghdoost <alireza@xxxxxxxxxx> wrote:
> On Mon, May 23, 2016 at 9:24 AM, Tim Walker <tim.t.walker@xxxxxxxxxxx> wrote:
>>
>> Hello-
>>
>> I need to do random writes/reads to some thin-provisioned block
>> devices. Writes are no problem, but I have to make sure the reads
>> come from blocks that have already been written (or else the device
>> synthesizes fill data). Pre-filling the devices is not desirable since
>> they are 8-12 TiB, plus that isn't the way they are really used.
>>
>> IO at the block level is our customer's requirement, but I'd also be
>> interested in the same concept at the file level, where fio randomly
>> reads from files that it has randomly written.
>>
>> I'm sure I'm not the first person who has come up against this, but
>> I've searched/Googled the best I can and have come up empty. Can
>> somebody point me to the correct switches to force reads to be
>> randomly selected from the blocks that have been written by that test
>> sequence?
>>
>> Thanks in advance.
>>
>> Best regards,
>> -Tim
>
> Tim,
>
> If you don't want to mix read and write workloads, could you just write
> randomly to your block device with one job first, then use the same
> random seed in the next job to read back those same random blocks?
>
> --Alireza

Or you could run two jobs simultaneously with the same random seed: the
first job only writes with a large queue depth and the second job only
reads with a small queue depth. That gives a mixed workload in which the
writer covers the random blocks faster than the reader job reads them
back.
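
To make the sequential variant concrete, here is a minimal sketch using
standard fio options. The device path, seed, block size, and io_size
amount are placeholders you would replace with your own values; the idea
is simply that randseed pins the sequence of random offsets, so a second
invocation with identical options replays the same LBAs as reads:

    # pass 1: random writes with a fixed seed (placeholder values throughout)
    fio --name=seed-writes --filename=/dev/mapper/thinvol \
        --rw=randwrite --bs=4k --direct=1 --ioengine=libaio --iodepth=32 \
        --randseed=12345 --io_size=64g

    # pass 2: same seed and same options, so the random reads should land
    # on the LBAs written in pass 1
    fio --name=seed-reads --filename=/dev/mapper/thinvol \
        --rw=randread --bs=4k --direct=1 --ioengine=libaio --iodepth=32 \
        --randseed=12345 --io_size=64g

For the simultaneous variant, the two jobs could instead go into one job
file with randseed in the [global] section and the writer at a much higher
iodepth than the reader; whether both jobs in a single run derive
identical offset sequences from the shared seed is worth verifying on a
small io_size first.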