On 2011-08-31 12:30, Jeff Moyer wrote:
> Brian Fallik <bfallik@xxxxxxxxxxx> writes:
>
>> Hi,
>>
>> Apologies if this is documented somewhere else, but I couldn't find it
>> in the fio man page, example job files, or list archives.
>>
>> I'm exploring fio as a testing tool, and it seems very well suited to
>> my needs. I'm currently running experiments with N sequential writers,
>> all writing at 200k. The job file is very simple:
>>
>> [global]
>> size=10m
>> directory=.
>>
>> [foo1]
>> rw=write
>> rate=200k
>>
>> [foo2]
>> ...
>>
>> fio creates various foo* files as part of its test, but they all seem
>> to contain the same content. I would have expected fio to generate
>> random data in each file to avoid potential optimizations like
>> deduplication. Am I missing the flag to generate random test
>> patterns, or is this behavior intentional?
>
> refill_buffers
>     If this option is given, fio will refill the IO buffers on every
>     submit. The default is to only fill it at init time and reuse
>     that data. Only makes sense if zero_buffers isn't specified,
>     naturally. If data verification is enabled, refill_buffers is
>     also automatically enabled.

Yes. Fio does use random data by default, but to avoid slowing things
down too much, it also defaults to reusing the same random data all the
time. If you set the above option, you get fully fresh random data for
every write, thus fully defeating any de-dupe/compression attempts on
the target.

-- 
Jens Axboe
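For reference, Brian's job file with Jens's suggestion applied might look
something like this (a sketch: refill_buffers is placed in [global] here
so every job gets fresh buffers on each submit, and the remaining foo*
sections are assumed to repeat the foo1 pattern):

[global]
size=10m
directory=.
refill_buffers=1

[foo1]
rw=write
rate=200k

[foo2]
...

Setting the option once in [global] rather than per job keeps the file
short and ensures no writer is accidentally left reusing the same buffer.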