Hello!
I'm in the process of benchmarking a cloud and have written scripts
that capture fio output from X runs (using the --minimal option).
I run 4 different tests: sequential read/write and random read/write. I've
found that the write output is in field $25 and the read output is in field
$6, but I'm struggling to pull out the min, max and stddev of the completion
latency (clat).
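Roughly, the extraction looks like this (a simplified sketch; the field
numbers are just the ones I found for my fio version, and the file names
are placeholders for the captured --minimal output, which is
semicolon-separated):

    # read figure from a sequential-read run
    awk -F';' '{ print $6 }'  seqread.terse
    # write figure from a sequential-write run
    awk -F';' '{ print $25 }' seqwrite.terse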
Also, I've read that the latency units change from usec to msec where it's
convenient. Any tips on always getting the values out in msec?
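To make that concrete, what I'm after is basically just a divide-by-1000,
assuming the terse clat fields really are in usec (the field number below
is a placeholder, since that's part of what I can't pin down):

    # CLAT_MEAN_FIELD is a placeholder -- I still need to confirm which
    # terse field holds the clat mean for my fio version.
    awk -F';' -v f="$CLAT_MEAN_FIELD" '{ printf "%.3f\n", $f / 1000 }' run.terse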
Thanks