Hi Steve,

Have you tried, say, 100G for the file size instead of 1G to see what happens? That would be the first toggle once you clean up the parameters. Remember that high-end storage can be using DRAM in the hardware even though you have the direct flag set; I have no idea what type of storage system you are using.

Intel has a free guide on how we test Optane SSDs; if you want it, I can send it to you. It covers that cleanup and the things in Linux that matter when testing high-end SSDs, but it's a guide, and people often don't spend time reading 25-page docs.

Inline below I have added our basic job file, which covers the kind of basic testing that often gets shown on an OEM's data sheet. Passing a job file to fio tends to be a bit less cumbersome than a long command line, too. You may wish to ignore cpus_allowed; we use it for NUMA issues and for getting very exact data. It will and does improve latencies :), especially on something as latency-sensitive as an Optane SSD.

#fio <jobfilename>

<start of a job file>
[global]
direct=1
filename=/dev/nvme0n1
log_avg_msec=500
time_based
ioengine=libaio
percentile_list=1:5:10:20:30:40:50:60:70:80:90:95:99:99.5:99.9:99.95:99.99:99.999:99.9999

[seq-write-64k-qd256]
runtime=60
bs=64k
iodepth=256
numjobs=1
cpus_allowed=0
ioengine=libaio
rw=write
stonewall

[seq-read-64k-qd256]
runtime=60
bs=64k
iodepth=256
numjobs=1
cpus_allowed=0
ioengine=libaio
rw=read
stonewall

[rand-write-4k-qd1]
runtime=60
bs=4K
iodepth=1
numjobs=1
cpus_allowed=0
ioengine=pvsync2
hipri
rw=randwrite
stonewall

[rand-read-4k-qd1]
runtime=60
bs=4K
iodepth=1
numjobs=1
cpus_allowed=0
ioengine=pvsync2
hipri
rw=randread
stonewall

[rand-read-write-4k-qd128]
runtime=60
bs=4K
iodepth=32
numjobs=4
cpus_allowed=0,1,2,3
rwmixread=70
rwmixwrite=30
ioengine=libaio
rw=randrw
group_reporting
stonewall
<end of a job file>

-----Original Message-----
From: Steve King <stkissteve@xxxxxxxxx>
Sent: Wednesday, April 14, 2021 6:39 AM
To: fio@xxxxxxxxxxxxxxx
Subject: [QUESTION] Incorrect arguments for FIO job?
Hi folks - could use some help as I'm not a fio expert.

Short version: an external team is presenting results to me that I don't entirely believe. The resulting IOPS and throughput for 'random read' and 'random write' are exactly the same, which makes no logical sense. I'm concerned they have a problem with the arguments used in fio, especially as I notice a typo. Here is what's being run:

fio --randrepeat=1 --ioengine=libaio --direct=1 --sync=1 --name=fio-test --filename=random_50read_50write-.fio --overwrite=1 --iodepth=64 --size=1GB --readrite=randrw --rwmixread=50 --rwmixwrite=50 --bs=16k --runtime=60 --time_based --ramp_time=15 --percentage_random=50

Note the typo: "readrite" should presumably be "readwrite" (or "rw"). A lot of these arguments also seem redundant.

Thanks!
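A note on the quoted command above: this is a hedged sketch of what a deduplicated equivalent might look like, assuming the intent really is a 50/50 16k random mix; /dev/nvme0n1 is a placeholder target, not Steve's actual device. Two things stand out: fio normally rejects unrecognized options, so a command containing --readrite likely isn't what actually ran; and --percentage_random=50 makes half the accesses sequential, which undercuts a "random" test.

```shell
# Hedged sketch of a cleaned-up 50/50 random read/write job at 16k.
# /dev/nvme0n1 is a placeholder; point --filename at the real test target.
fio --name=randrw-16k \
    --filename=/dev/nvme0n1 \
    --ioengine=libaio \
    --direct=1 \
    --rw=randrw --rwmixread=50 \
    --bs=16k \
    --iodepth=64 \
    --size=100G \
    --runtime=60 --time_based --ramp_time=15
# Dropped from the quoted command: --sync=1 (adds O_SYNC overhead on top of
# O_DIRECT), --percentage_random=50 (makes half the I/O sequential),
# --randrepeat=1 (the default), --overwrite=1 (only matters when laying out
# files), --rwmixwrite=50 (implied by --rwmixread=50), and the misspelled
# --readrite (the valid spellings are --rw or --readwrite).
```

Run against a raw device as above, the result should show clearly different read and write IOPS for a mixed workload, rather than identical numbers.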