I was digging into the bandwidth/iops logging code in stat.c after thinking about how to compute an average bandwidth/iops over a period of time given interval log entries, and I noticed something concerning.

When there is an interval where nothing happened (e.g. no read iops), nothing is logged, but the time of the last interval is still advanced (there is only one such time for each log, shared by all directions). This makes sense when you are only doing reads and don't want write entries in your log, but it means that looking at a log file can be pretty misleading.

If you are logging with an interval of 100 ms but only one read is occurring per second, you get an iops log like this:

149, 6, 0, 0, 0
1056, 10, 0, 0, 0
2064, 9, 0, 0, 0
3072, 10, 0, 0, 0
4081, 10, 0, 0, 0

If you shrink the interval to 10 ms, you get this:

149, 6, 0, 0, 0
1011, 100, 0, 0, 0
2011, 100, 0, 0, 0
3011, 100, 0, 0, 0
4011, 100, 0, 0, 0

Those zero entries are actually really important in a case like this: neither of those logs looks like a log for a job that did 1 read per second. This is an extreme example to show the problem, and of course we know what's going on here. But what if you're doing a practical test on storage with a fairly small log interval, and the storage hiccups and processes no operations in an interval? It can be impossible to know whether an apparent "gap" in a log file means there was a genuinely idle interval, or a system hiccup caused the logging process to run a little late.

Any thoughts? Just logging the zero entries would work, but at the expense of noise in the logs. One option I can think of is remembering, per direction, that the last interval was a zero entry, and writing it out if a non-zero interval is later encountered. The last entry per direction would also always have to be written, if any non-zero interval for that direction was ever written. A rough sketch of what I mean is below.

Thanks,
Nick
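
To make that option concrete, here is a small self-contained sketch of the bookkeeping I have in mind. It is not written against fio's actual stat.c structures; dir_state, emit_sample, log_interval and flush_log are all made-up names standing in for whatever the real log-writing path uses.

/* Rough sketch only -- not fio's actual stat.c API.  All names here
 * (dir_state, emit_sample, log_interval, flush_log) are made up for
 * illustration. */
#include <stdbool.h>
#include <stdint.h>
#include <stdio.h>

#define NR_DIRS 3	/* read, write, trim */

struct dir_state {
	bool seen_nonzero;	/* direction ever had a non-zero interval logged */
	bool zero_pending;	/* most recent interval for this direction was zero */
	uint64_t zero_time;	/* timestamp of that zero interval (msec) */
};

static struct dir_state dstate[NR_DIRS];

/* stand-in for whatever actually writes a log line */
static void emit_sample(FILE *f, uint64_t t, uint64_t val, int dir)
{
	fprintf(f, "%llu, %llu, %d, 0, 0\n",
		(unsigned long long) t, (unsigned long long) val, dir);
}

/* called once per direction per logging interval */
static void log_interval(FILE *f, uint64_t t, uint64_t val, int dir)
{
	struct dir_state *ds = &dstate[dir];

	if (!val) {
		/* remember the zero interval instead of writing it */
		ds->zero_pending = true;
		ds->zero_time = t;
		return;
	}

	/* a non-zero interval follows a quiet stretch: write one zero
	 * entry first so the gap is visible in the log */
	if (ds->zero_pending) {
		emit_sample(f, ds->zero_time, 0, dir);
		ds->zero_pending = false;
	}

	emit_sample(f, t, val, dir);
	ds->seen_nonzero = true;
}

/* called when the log is finalised: if a direction ended on a quiet
 * interval but was active earlier, write that trailing zero entry too */
static void flush_log(FILE *f)
{
	for (int dir = 0; dir < NR_DIRS; dir++) {
		struct dir_state *ds = &dstate[dir];

		if (ds->zero_pending && ds->seen_nonzero)
			emit_sample(f, ds->zero_time, 0, dir);
	}
}

With this, a long idle stretch costs at most one extra zero line per direction before each burst of activity (plus one at the end of the log), so a tool averaging over the log can still see where the quiet periods were without the log filling up with zero entries.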