Hello,
On 16-03-2010 09:05, Jens Axboe wrote:
> I think what you are missing is that the random writes will create a
> large sparse file. The output is 10G, and you are doing a lot of reads.
> So you could end up writing only 30% of the 10G, the rest would be
> sparse holes in the file.
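For anyone who wants to see the effect for themselves, here is a small sketch (the file name demo.img is made up, not from fio): it creates a mostly-sparse file and compares the apparent size with the blocks actually allocated on disk.

```shell
# Create a 10 MiB sparse file by seeking past the end without writing data.
dd if=/dev/zero of=demo.img bs=1 count=0 seek=10M 2>/dev/null
# Write 1 MiB of real data at the start; the remaining 9 MiB stays a hole.
dd if=/dev/urandom of=demo.img bs=1M count=1 conv=notrunc 2>/dev/null
ls -l demo.img   # apparent size: 10 MiB
du -k demo.img   # allocated blocks: roughly 1 MiB on a typical filesystem
```

Reading the holes back gives null bytes, which is exactly what the tr test below filters out.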
Yes, the file sparseness was the part I had to grasp.
Expressed in another way: If I filter out the null-bytes from fio's
work-file (cat iometer.1.0 | tr -d '\0'), then the remainder doesn't
compress at all using gzip with default options.
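That matches how gzip treats the two kinds of content. A quick illustration (not fio output, just a sketch of gzip's behavior on zeros versus random data):

```shell
# A megabyte of null bytes (what sparse holes read back as) compresses
# down to about a kilobyte:
head -c 1M /dev/zero | gzip -c | wc -c
# A megabyte of random data barely compresses at all; gzip output is
# slightly larger than the input:
head -c 1M /dev/urandom | gzip -c | wc -c
```

So any storage-side compression savings on the work-file would come almost entirely from the holes, not from the data fio actually wrote.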
So I'll stop worrying about storage-side compression/de-duplication when
interpreting fio's results.
Thanks.
--
Regards,
Troels Arvin <troels@xxxxxxxx>
http://troels.arvin.dk/