Re: Sequential Writes and Random Reads in parallel

Hi Sitsofe,

Yes, as you said, the jobs are tethered and run at the slowest job's
speed. How can I make one job run faster and the other slower?

Thanks,
Prabhu

On Thu, 9 May 2019 at 00:33, Sitsofe Wheeler <sitsofe@xxxxxxxxx> wrote:
>
> On Mon, 6 May 2019 at 19:54, Prabhakaran <prabhugce@xxxxxxxxx> wrote:
> >
> > Looks like the percentage_random option won't work, as I need the
> > IOPS log data separately for sequential and random I/O. Your
> > alternative option, using separate jobs and controlling the ratio
> > with flow, works fine. But I'm seeing a significant performance drop
> > when flow is introduced, even though the random/sequential ratio
> > looks right. Below is the comparison table, with values in IOPS.
>
> <snip>
>
> > As you can see, with pure random, IOPS is at 204131. And when flow
> > is set to 50%, IOPS drops to 1318 for random. I would expect
> > something like 100k IOPS for random, since that workload is set to
> > run 50% of the time. Is my assumption wrong?
>
> Are you sure your writes are as fast as your reads? When you use
> flow, the jobs are tethered, so ultimately you will go at the speed
> of the slowest job...
>
> --
> Sitsofe | http://sucs.org/~sits/
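One way to approach the question of running one job faster than the other is to give the jobs asymmetric flow weights. A minimal job-file sketch follows, assuming the token-based flow semantics documented in the fio HOWTO of this era: one job carries a positive weight, its partner a negative one, fio keeps the running flow counter near zero, and the jobs' I/O counts therefore end up roughly inversely proportional to the weight magnitudes. The device path, block sizes, and log names are placeholder assumptions, not from the thread:

```ini
; flow-ratio.fio -- hypothetical sketch, not from the thread.
; Per-I/O tokens must balance: each read moves the counter by -1,
; each write by +8, so the random reads should end up issuing
; roughly 8x as many I/Os as the sequential writes.
[global]
filename=/dev/nvme0n1     ; placeholder device
direct=1
ioengine=libaio
iodepth=32
runtime=60
time_based
log_avg_msec=1000

[rand-read]
rw=randread
bs=4k
flow=-1
write_iops_log=rand-read  ; separate per-job IOPS log

[seq-write]
rw=write
bs=128k
flow=8
write_iops_log=seq-write
```

Because flow only throttles the faster job down to hold the ratio, the pair still runs no faster than the weighted pace of the slowest job, which is consistent with the tethering described above.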


