Hello,

On Mar 15, Jens Axboe wrote:
> By default, fio will at init time randomly fill the buffer of the allocated IO units. If you are using the sync io engine, then only one buffer will be allocated and that will be repeatedly written. So yes, that'll compress very nicely.
I'm using libaio. So the data from that shouldn't compress all that much?
> You can enable refill_buffers=1 and that'll cause fio to randomly fill it every time it's submitted instead. That should effectively disable compression at the storage end.
Turning refill_buffers on or off doesn't seem to make much of a difference when compressing fio's work-file with "gzip -2" (201MB vs 198MB). But perhaps "gzip -2" still compresses more than one can expect from a storage system?
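The effect being discussed can be illustrated outside fio. Below is a small Python sketch (my own illustration, not part of fio) comparing how well gzip level 2 compresses one random block written repeatedly (like the single reused buffer of a sync engine) versus fresh random data per block (like refill_buffers=1). The block size of 4 KiB is an arbitrary choice for the demo.

```python
import gzip
import os

# One random 4 KiB block, reused 256 times (~1 MiB), versus ~1 MiB of
# fresh random data. The repeated case mimics a single IO buffer being
# written over and over; the fresh case mimics refill_buffers=1.
block = os.urandom(4096)
repeated = block * 256
fresh = os.urandom(len(repeated))

# Compress both with gzip at level 2, as in the "gzip -2" test above.
ratio_repeated = len(gzip.compress(repeated, compresslevel=2)) / len(repeated)
ratio_fresh = len(gzip.compress(fresh, compresslevel=2)) / len(fresh)

print("repeated block ratio:", ratio_repeated)
print("fresh random ratio:  ", ratio_fresh)
```

The repeated buffer compresses heavily because deflate's 32 KiB window keeps finding the same 4 KiB block, while the fresh random data barely compresses at all; a storage array's inline compression would see a similar difference.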
I'm probably missing something. Here's the job description file I'm using (in this case only testing a small amount of I/O; when I was testing the storage systems, I used size=10g and numjobs=6):
===========================================================
[global]
description=Emulation of Intel IOmeter File Server Access Pattern

[iometer]
bssplit=512/10:1k/5:2k/5:4k/60:8k/2:16k/4:32k/4:64k/10
rw=randrw
rwmixread=70
direct=1
size=1g
ioengine=libaio
iodepth=256
refill_buffers=0
===========================================================

-- 
Regards,
Troels Arvin <troels@xxxxxxxx>
http://troels.arvin.dk/