Re: not all the jobs running at equal speed

It doesn't surprise me that the run time increases, but does that make
all the jobs run at equal speed? If so, do you see unequal speeds when
you remove the stonewall from the numjobs setup? PS: after checking
that, you might want to investigate the offset_increment option
(http://fio.readthedocs.io/en/sphinx-doc/fio_doc.html#cmdoption-arg-offset_increment
).
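As a rough sketch, offset_increment lets numjobs-cloned jobs each start at a
different offset, so four parallel jobs can each cover their own quarter of the
device without overlapping (device name and block/queue settings below are
placeholders, adjust to your setup; percentage values for size/offset_increment
need a reasonably recent fio):

```ini
; Hypothetical example: fill the device once with four parallel jobs.
; Job N (0..3) starts at N * 25% of the device and writes 25% of it,
; so between them each lba is written no more than once.
[global]
filename=/dev/nvme0n1   ; placeholder device
rw=randwrite
bs=4k
direct=1
ioengine=libaio
iodepth=32
size=25%
offset_increment=25%

[fill]
numjobs=4
```

This replaces hand-crafting four separate job sections with explicit offset=
values, which seems to be what the original job file was doing.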

On 1 February 2017 at 21:48, Prabhu Murugesan (pmurugesan)
<pmurugesan@xxxxxxxxxx> wrote:
> Actually, I wanted all four jobs to run in parallel as opposed to running them in sequence. It is similar to the 'numjobs' concept. But I wanted to limit the amount of data being written and the lba range, so that each lba is written no more than once.
>
> The way you suggested increases runtime from original 30 mins to 45 mins now.
>
> -----Original Message-----
> From: Sitsofe Wheeler [mailto:sitsofe@xxxxxxxxx]
> Sent: Wednesday, February 01, 2017 12:21 PM
> To: Prabhu Murugesan (pmurugesan) <pmurugesan@xxxxxxxxxx>
> Cc: fio@xxxxxxxxxxxxxxx
> Subject: Re: not all the jobs running at equal speed
>
> Looking a bit closer:
>
> The jobs in the first run have no stonewall so all 4 jobs will run simultaneously. The jobs in the second run have stonewall so at any given time only one of the jobs will be running. Does adding stonewall to the [global] section of the first run give similar results to the second run?
>
> On 1 February 2017 at 19:04, Sitsofe Wheeler <sitsofe@xxxxxxxxx> wrote:
>> Hi,
>>
>> On 1 February 2017 at 16:46, Prabhu Murugesan (pmurugesan)
>> <pmurugesan@xxxxxxxxxx> wrote:
>>>
>>> I am trying to run 4 jobs in parallel in order to fill a drive once
>>> randomly. The dut that I am using is a 450GB NVMe drive. The problem
>>> is, not all the jobs are running at equal speed. You can find iops
>>> log file data for all the jobs below
>>
>> The numjobs run is going to make all four jobs scribble over the
>> "same" range compared to your carefully crafted first run. What
>> happens if you set the offset to be 0 for all your jobs in your first
>> run and let the size be 100%?
>>
>> PS: it helps if you reduce the number of parameters to the bare
>> minimum that still shows the problem, as it's less to look through.

-- 
Sitsofe | http://sucs.org/~sits/



