Hi,

I strongly recommend you reduce your configuration down to the minimum
set of options that still shows the problem, then update us with that
configuration.

Something else to look at is adjusting your size and offset parameters
to be multiples of your blocksize:

bs=4k
ba=4k
[...]
size=112524539904
[...]
offset=112524539904

112524539904 % 4096 doesn't equal 0 (it is 2048 bytes off a 4k
boundary), so you are potentially forcing fio to do extra work to keep
realigning your I/O (you have ba=4k in there, which ideally wouldn't be
needed if the I/Os aligned naturally). Note that 0 % 4096 and
225049079808 % 4096 do equal 0, so there's less alignment fixing taking
place there... (a sketch of an aligned job file is appended at the end
of this mail).

On 1 February 2017 at 23:05, Prabhu Murugesan (pmurugesan)
<pmurugesan@xxxxxxxxxx> wrote:
> No, with or without stonewall, I am seeing unequal speeds. Even with
> 'offset_increment', I am seeing the same. It behaves exactly the same
> way as my very first configuration with offset.
>
> -----Original Message-----
> From: Sitsofe Wheeler [mailto:sitsofe@xxxxxxxxx]
> Sent: Wednesday, February 01, 2017 3:13 PM
> To: Prabhu Murugesan (pmurugesan) <pmurugesan@xxxxxxxxxx>
> Cc: fio@xxxxxxxxxxxxxxx
> Subject: Re: not all the jobs running at equal speed
>
> It doesn't surprise me that run time increases, but does that "make"
> all the jobs have equal speed? If so, do you see unequal speeds when
> you remove the stonewall from the numjobs setup?
>
> PS: after checking that, you might want to investigate the
> offset_increment option
> (http://fio.readthedocs.io/en/sphinx-doc/fio_doc.html#cmdoption-arg-offset_increment).
>
> On 1 February 2017 at 21:48, Prabhu Murugesan (pmurugesan)
> <pmurugesan@xxxxxxxxxx> wrote:
>> Actually, I wanted all four jobs to run in parallel as opposed to
>> running them in sequence. It is similar to the 'numjobs' concept.
>> But I wanted to limit the amount of data being written and the LBA
>> range, so that no LBA is written more than once.
>>
>> The way you suggested increases the runtime from the original 30
>> minutes to 45 minutes.
>>
>> -----Original Message-----
>> From: Sitsofe Wheeler [mailto:sitsofe@xxxxxxxxx]
>> Sent: Wednesday, February 01, 2017 12:21 PM
>> To: Prabhu Murugesan (pmurugesan) <pmurugesan@xxxxxxxxxx>
>> Cc: fio@xxxxxxxxxxxxxxx
>> Subject: Re: not all the jobs running at equal speed
>>
>> Looking a bit closer:
>>
>> The jobs in the first run have no stonewall, so all 4 jobs will run
>> simultaneously. The jobs in the second run have stonewall, so at any
>> given time only one of the jobs will be running. Does adding
>> stonewall to the global section of the first run give similar
>> results to the second run?
>>
>> On 1 February 2017 at 19:04, Sitsofe Wheeler <sitsofe@xxxxxxxxx> wrote:
>>> Hi,
>>>
>>> On 1 February 2017 at 16:46, Prabhu Murugesan (pmurugesan)
>>> <pmurugesan@xxxxxxxxxx> wrote:
>>>>
>>>> I am trying to run 4 jobs in parallel in order to fill a drive
>>>> once randomly. The DUT that I am using is a 450GB NVMe drive. The
>>>> problem is that not all the jobs are running at equal speed. You
>>>> can find the IOPS log file data for all the jobs below.
>>>
>>> The numjobs run is going to make all four jobs scribble over the
>>> "same" range, compared to your carefully crafted first run. What
>>> happens if you set the offset to 0 for all the jobs in your first
>>> run and let the size be 100%?
>>>
>>> PS: it helps if you reduce the number of parameters to the bare
>>> minimum that still shows the problem, as it's less to look through.
>
> --
> Sitsofe | http://sucs.org/~sits/

--
Sitsofe | http://sucs.org/~sits/
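
A minimal sketch (untested) of the aligned numjobs + offset_increment
approach referred to above. The device path, ioengine and iodepth are
placeholder assumptions rather than values from the original job file:

# fill.fio - fill the drive once with 4 parallel 4k random-write jobs
[global]
# placeholder path for the 450GB device under test
filename=/dev/nvme0n1
direct=1
rw=randwrite
bs=4k
# assumed engine/depth; substitute whatever the original run used
ioengine=libaio
iodepth=32

[fill]
numjobs=4
# 112524537856 is the original 112524539904 rounded down to a
# 4096-byte boundary, so the per-job size and every per-clone offset
# (0, 1, 2 and 3 times this value) are multiples of bs, and ba=4k is
# no longer needed
size=112524537856
offset_increment=112524537856

Each of the four clones should then write every 4k block of its own
quarter of the drive exactly once (fio's default random map stops
blocks being revisited), all four regions start on a 4k boundary, and
the clones run in parallel with no stonewall between them.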