On 3/10/22 12:47 PM, Nick Neumann wrote:
> I had switched to doing per_unit (every sample) logging recently in
> order to work around some issues I was hitting with using
> --log_avg_msec.
>
> After looking at my results, I could see a significant decrease in
> maximum and average bandwidth during runs. So I started playing with
> disabling other aspects of logging and was really surprised by the
> effect, running a 60 second random write test after an nvme format and
> 10 minutes of idle. I repeated the experiment many times to make sure
> it wasn't just noise.
>
> With a 970 Evo 1TB, I saw the following results:
>
> --log_avg_msec=100, all latency/latency percentile options disabled:
>   max bandwidth 1.04GB/s, avg for run 0.99GB/s
> --log_avg_msec=100, all latency/latency percentile options enabled:
>   max bandwidth 1.00GB/s, avg for run 0.94GB/s
> no log_avg_msec, all latency/latency percentile options enabled:
>   max bandwidth 0.95GB/s, avg for run 0.91GB/s
>
> The difference between the best and worst configurations is on the
> order of 100MB/s of bandwidth. Is this surprising at all, or expected
> with fio?

It's certainly not unexpected. The higher the granularity of logging,
the higher the cost. This is particularly evident if you're using
smaller block sizes, and hence higher IOPS.

-- 
Jens Axboe
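
For context, the two logging modes being compared correspond roughly to
invocations like the sketch below. The device path, block size, queue
depth, ioengine, and log file prefixes are illustrative assumptions,
not details taken from the original report; only the logging flags
mirror the options discussed in the thread.

    # Averaged logging: fio aggregates bandwidth/IOPS samples over each
    # 100 ms window, so the logs grow slowly and per-I/O overhead stays low.
    # (Device path, bs, iodepth, and the "avg" log prefix are assumptions.)
    fio --name=randwrite --filename=/dev/nvme0n1 --direct=1 --rw=randwrite \
        --bs=4k --ioengine=libaio --iodepth=32 --time_based --runtime=60 \
        --write_bw_log=avg --write_iops_log=avg --log_avg_msec=100

    # Per-I/O logging: without log_avg_msec, fio records an entry for every
    # completed I/O; enabling latency logs and percentiles adds further
    # bookkeeping on the completion path.
    fio --name=randwrite --filename=/dev/nvme0n1 --direct=1 --rw=randwrite \
        --bs=4k --ioengine=libaio --iodepth=32 --time_based --runtime=60 \
        --write_bw_log=per_io --write_iops_log=per_io --write_lat_log=per_io \
        --lat_percentiles=1

The reported gap (roughly 0.99 GB/s vs 0.91 GB/s average) fits the
point above: every log entry is extra work on the I/O completion path,
so the denser the logging, the more visible the cost at high IOPS.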