On Thu, 13 Jan 2022 at 00:57, Gibson, Thomas <thomas.gibson@xxxxxxx> wrote:
>
> My company builds SD-WAN network appliances and uses SSDs and NVMe
> drives for several key purposes.
>
> In order to qualify upcoming new disks for our systems, we typically
> run 2-hour test runs using fio for the following:
>
> seqrd, seqwr, seqrw, seqrd_seqwr
> randrd, randwr, randrw, randrd_randwr, randrd_seqwr
>
> We've noticed that the single-workload tests (i.e. seqrd, seqwr, etc.)
> show higher numbers than their counterparts in the combined,
> multi-workload tests (i.e. seqrd_seqwr), but we don't understand why.
> This may be partly normal, but we don't understand how the testing
> works well enough to explain it - and if it's not normal, what factors
> might account for it?
>
> I've included a table of test data below. You'll notice, as an
> example, that the seq read and seq write numbers are much higher than
> the seq read part of seqrd_seqwr, and even higher than seqrw.
>
> I've also included a package of fio and test execution files in case
> that helps.
>
> Also, prior to each test run we do a prefill write to the disk and
> clear the buffer cache, if that helps.
>
> FIO              SSSTC_CVB-8D120_FW_CZJG801
> Seq Read         533MiB/s
> Seq Write        317MiB/s
> Seq Read/Write   138MiB/s & 138MiB/s   ; why are the values lower here?

An example job from your tarball (included here because it's easier to
read):

; Random Read,Sequential Write
[global]
ioengine=libaio
direct=1
iodepth=16
randrepeat=0
bs=256000
time_based
runtime=7200
log_avg_msec=500

[SSSTC_CVB-8D120_FW_CZJG801_2ndrun_r640-RandRd]
filename=/dev/sde
write_bw_log=SSSTC_CVB-8D120_FW_CZJG801_2ndrun_r640-randrd
write_iops_log=SSSTC_CVB-8D120_FW_CZJG801_2ndrun_r640-randrd
write_lat_log=SSSTC_CVB-8D120_FW_CZJG801_2ndrun_r640-randrd
rw=randread

[SSSTC_CVB-8D120_FW_CZJG801_2ndrun_r640-SeqWr]
filename=/dev/sde
write_bw_log=SSSTC_CVB-8D120_FW_CZJG801_2ndrun_r640-seqwr
write_iops_log=SSSTC_CVB-8D120_FW_CZJG801_2ndrun_r640-seqwr
write_lat_log=SSSTC_CVB-8D120_FW_CZJG801_2ndrun_r640-seqwr
rw=write

(Note that fio starts the two job sections above at the same time -
there is no stonewall between them - so the random reads and sequential
writes run concurrently against /dev/sde.)

What does fio's summary output look like for that job? On Linux, fio
prints disk and CPU utilisation information in its summary when the job
finishes - what does it say? Alternatively, take a look at the
"iostat -xzh 1" output while the job is running and see what the disk
utilisation is like.

--
Sitsofe
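P.S. For reference, the disk part of fio's end-of-run summary is shaped
roughly like the sketch below (the field values are deliberately elided
here - this shows the layout, not output from your run):

Disk stats (read/write):
  sde: ios=.../..., merge=.../..., ticks=.../..., in_queue=..., util=..%

The util figure is, roughly, the share of wall-clock time the device had
at least one request outstanding, so it's a quick first hint as to
whether the disk itself is busy or the bottleneck sits above it.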
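If it's easier to capture the iostat view for a whole run, a minimal
shell sketch along these lines works (it assumes iostat from the
sysstat package is installed, and "randrd_seqwr.fio" is only a stand-in
name for whichever job file is under test):

# Sample extended stats for active devices once a second in the
# background, then stop the sampler once the fio run completes.
iostat -xzh 1 > iostat.log &
IOSTAT_PID=$!
fio randrd_seqwr.fio
kill "$IOSTAT_PID"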