Hi all,

I'm seeing very high CPU utilization with the refill_buffers option,
presumably because rand() is used to generate all of the data.

Output with zero_buffers:

zero_buffers: (g=0): rw=randwrite, bs=64K-64K/64K-64K, ioengine=psync, iodepth=1
...
zero_buffers: (g=0): rw=randwrite, bs=64K-64K/64K-64K, ioengine=psync, iodepth=1
zero_buffers: (groupid=0, jobs=32): err= 0: pid=21556
  write: io=4600MB, bw=156966KB/s, iops=2452, runt= 30009msec
    clat (usec): min=378, max=139675, avg=13045.49, stdev=1468.67
    bw (KB/s) : min= 2609, max= 6677, per=3.11%, avg=4886.17, stdev=120.46
  cpu : usr=0.30%, sys=1.87%, ctx=2452182, majf=0, minf=11463

Output with refill_buffers:

refill_buffers: (g=0): rw=randwrite, bs=64K-64K/64K-64K, ioengine=psync, iodepth=1
...
refill_buffers: (g=0): rw=randwrite, bs=64K-64K/64K-64K, ioengine=psync, iodepth=1
refill_buffers: (groupid=0, jobs=32): err= 0: pid=21503
  write: io=4246MB, bw=144867KB/s, iops=2263, runt= 30010msec
    clat (usec): min=293, max=140908, avg=13969.29, stdev=1837.85
    bw (KB/s) : min= 1187, max= 6843, per=3.13%, avg=4535.65, stdev=204.58
  cpu : usr=37.76%, sys=1.63%, ctx=2286876, majf=0, minf=29750

Note the jump in user CPU time from 0.30% to 37.76%. While writing random
data is useful, that overhead is prohibitively expensive in high-throughput
tests. Would it be better to allocate one large memory buffer, initialize it
with random data once, and then draw the data for each write from a random
offset within that buffer?

--
Bryan Veal <bryan.e.veal@xxxxxxxxx>
NOT SPEAKING FOR INTEL
--
To unsubscribe from this list: send the line "unsubscribe fio" in
the body of a message to majordomo@xxxxxxxxxxxxxxx
More majordomo info at http://vger.kernel.org/majordomo-info.html