Hello everyone,

Recently we have been running fio tests, starting with a general job profile like this one:

jerryxu@jerryxu fio_profile % cat direct_io
[direct-io]
rw=${MODE}
bs=${BS}
size=${SIZE}
direct=1
ioengine=libaio
iodepth=64
ramp_time=10
directory=/mnt/flash/fio_testing

We run it like this, and repeat:

-bash-4.2# date; BS=4k SIZE=1g MODE=randrw fio /mnt/flash/fio_profile/direct_io | grep -A 3 "Run statu"; date
Tue Sep 22 01:09:38 UTC 2020
Run status group 0 (all jobs):
   READ: io=524368KB, aggrb=185946KB/s, minb=185946KB/s, maxb=185946KB/s, mint=2820msec, maxt=2820msec
  WRITE: io=524208KB, aggrb=185889KB/s, minb=185889KB/s, maxb=185889KB/s, mint=2820msec, maxt=2820msec
Tue Sep 22 01:11:00 UTC 2020
-bash-4.2#

We noticed the following effects (this is NVMe flash, and the system is otherwise idle):

1. Without the file: after each run we remove the file, then run again. We noticed a long time is spent creating the file.
   - Is there an estimate of how long creating the file should take?
   - The file creation time differs from run to run.
   - If we use fallocate=none, will it affect disk IO performance or not? If yes, in what way?

2. With the file already created (after the first run, we don't remove the file), there is no explicit file layout time anymore.
   - The result still varies from run to run, even though this is an idle machine with nothing else running on it. Why is it different each time?
   - We also noticed that the wall-clock time from invoking fio to fio exiting is much longer than the time spent on the IO itself. For 1g/randrw, each run takes about 2-3 sec of IO, but the whole fio invocation takes about 1 to 2 minutes. What is the overhead here? Any estimate? (I suspect the same cause as above, but I can't easily put a timestamp after the file layout; see the sketch below this list.)

3. For the IO itself, we noticed some big differences in time: normally 1g/randrw takes about 2 to 3 sec, but occasionally a run completes its IOs in only 200 to 300 msec. What could be the cause? Write amplification?
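To bracket the file layout time separately from the IO time, one approach we are considering is a two-pass run. This is only a sketch, and it assumes fio's create_only option behaves as documented (create_only=1 lays out the files and exits without submitting any IO); the fallocate=none note at the end is the variant asked about in point 1:

# Pass 1: time the file layout alone; with create_only=1 fio should
# create/lay out the file and exit without doing any IO.
date
BS=4k SIZE=1g MODE=randrw fio --create_only=1 /mnt/flash/fio_profile/direct_io
date

# Pass 2: time the IO pass alone, against the file pre-created above.
date
BS=4k SIZE=1g MODE=randrw fio /mnt/flash/fio_profile/direct_io | grep -A 3 "Run statu"
date

# Variant for the fallocate question: add fallocate=none to the job file
# (or pass --fallocate=none) so the layout step skips preallocation.

The date stamps around each pass should let us attribute the 1-2 minute wall-clock overhead to either the layout phase or the IO phase.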
The test result is attached here.

Thanks,
Jerry

Attachment: result_w_o_file (binary data)