Re: Sequential Writes and Random Reads in parallel

Hi Sitsofe,

It looks like the percentage_random option won't work for me, as I
need separate IOPS log data for the sequential and random portions
(for reference, the single-job variant I ruled out is sketched at the
end of this mail). Your alternative, using separate jobs tied together
with flow, keeps the ratio right, but I'm seeing a significant
performance drop once flow is introduced. Below is the comparison,
with values in IOPS.

Rnd=50%, Seq=50%
----------------------------
[global]
ioengine=libaio
disable_slat=1
disable_lat=1
thread
cpus_allowed=0-7
cpus_allowed_policy=split
direct=1
ramp_time=1m
runtime=5m
unified_rw_reporting=1
[seq_wr]
bs=128kb
ba=128kb
rw=write
iodepth=256
time_based=1
log_avg_msec=0
refill_buffers
write_bw_log=seq_wr
write_iops_log=seq_wr
write_lat_log=seq_wr
flow=-50
[rnd_rd]
bs=4kb
ba=4kb
rw=randread
iodepth=256
time_based=1
log_avg_msec=0
randrepeat=0
norandommap
refill_buffers
write_bw_log=rnd_rd
write_iops_log=rnd_rd
write_lat_log=rnd_rd
flow=50

Output:
Rnd - 1318 IOPS
Seq - 1318 IOPS

Rnd=100%
---------------
[global]
ioengine=libaio
disable_slat=1
disable_lat=1
thread
cpus_allowed=0-7
cpus_allowed_policy=split
direct=1
ramp_time=1m
runtime=5m
unified_rw_reporting=1
[rnd_rd]
bs=4kb
ba=4kb
rw=randread
iodepth=256
time_based=1
log_avg_msec=0
randrepeat=0
norandommap
refill_buffers
write_bw_log=rnd_rd
write_iops_log=rnd_rd
write_lat_log=rnd_rd

Output:
Rnd - 204131 IOPS

As you can see, pure random runs at 204131 IOPS, but once flow is set
to 50 the random job drops to 1318 IOPS. I would have expected
something like 100k IOPS for random (roughly half of 204131), since
that job is weighted at 50%. Is my assumption wrong?
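
To check my reading of the flow documentation, I also put together a
tiny toy simulation (Python; the counter/watermark behaviour is only
my interpretation of the HOWTO's flow and flow_watermark text, not
fio's actual code, and the rates are the standalone numbers above):

# Toy model of fio's token-based flow control: every I/O a job issues
# adds that job's flow weight to a shared counter, and a job holds off
# whenever issuing would push the counter past flow_watermark
# (1024 by default, if I read the docs right) in its own direction.

RATES = {"seq_wr": 1318, "rnd_rd": 204131}  # standalone IOPS measured above
FLOW = {"seq_wr": -50, "rnd_rd": 50}        # flow weights from the job file
WATERMARK = 1024                            # assumed flow_watermark default

STEP = 1e-6                                 # simulate in 1 us ticks
counter = 0
done = {job: 0 for job in RATES}
credit = {job: 0.0 for job in RATES}

t = 0.0
while t < 1.0:                              # one simulated second
    for job, rate in RATES.items():
        credit[job] += rate * STEP          # ops the device could absorb
        while credit[job] >= 1.0:
            nxt = counter + FLOW[job]
            # flow gate: stall if this op would overshoot the
            # watermark in this job's own direction
            if abs(nxt) > WATERMARK and nxt * FLOW[job] > 0:
                break
            counter = nxt
            done[job] += 1
            credit[job] -= 1.0
    t += STEP

print(done)  # both jobs land near the slower job's rate (~1300 ops)

Under that reading, flow=-50/flow=50 enforces a roughly 1:1 operation
ratio between the two jobs rather than giving each one 50% of its
standalone throughput, so the random job gets pinned to the sequential
job's rate, which would match the 1318/1318 above. I'd appreciate
confirmation either way.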

Thanks,
Prabhu

On Fri, 26 Apr 2019 at 02:06, Sitsofe Wheeler <sitsofe@xxxxxxxxx> wrote:
>
> Hi,
>
> On Fri, 26 Apr 2019 at 06:34, Prabhakaran <prabhugce@xxxxxxxxx> wrote:
> >
> > Hi,
> >
> > I'm trying to do both random and sequential operations simultaneously
> > in one job. Specifically, I would like to do sequential writes and
> > random reads in parallel. Here is what I have so far.
> >
> > [global]
> > ioengine=libaio
> > disable_slat=1
> > disable_lat=1
> > thread
> > direct=1
> >
> > [job2]
> > stonewall
> > bs=128kb,4kb
> > ba=128kb,4kb
> > rw=randrw
> > iodepth=256
> > runtime=5s
> > time_based=1
> > log_avg_msec=0
> > randrepeat=0
> > norandommap
> > refill_buffers
> > write_bw_log=job2
> > write_iops_log=job2
> > write_lat_log=job2
> > bs_is_seq_rand=1
> > rwmixread=1
> > percentage_random=50
> >
> > I'm assuming the above job would do 128k sequential and 4k random I/O
> > (50% each), with the read/write mix (reads=1%) applied across both the
> > sequential and random portions, for 5s.
> > I'm still not able to get just sequential writes and random reads
> > running in parallel.
> >
> > Thanks,
> > Prabhu
>
> percentage_random can be split into reads/writes/trims with commas
> (see https://fio.readthedocs.io/en/latest/fio_doc.html#cmdoption-arg-percentage-random
> ) so that might help you out... Alternatively because you have a 50/50
> split you can use two separate jobs and tie them together with flow
> (https://fio.readthedocs.io/en/latest/fio_doc.html#cmdoption-arg-flow
> ).
>
> --
> Sitsofe | http://sucs.org/~sits/
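
P.S. For completeness, here is a sketch of the single-job
percentage_random variant I ruled out, using the comma-split form from
the docs Sitsofe linked (untested on my side, so treat it as
illustrative only; values mirror my jobs above). Everything stays in
one job, which is exactly why its IOPS log cannot separate the two
patterns:

[global]
ioengine=libaio
direct=1
thread

[mixed]
# 50% reads / 50% writes; reads fully random, writes fully sequential
rw=randrw
rwmixread=50
percentage_random=100,0
# with bs_is_seq_rand=1, bs is interpreted as sequential,random
bs_is_seq_rand=1
bs=128k,4k
iodepth=256
time_based=1
runtime=5m
# one job, so this log mixes the random reads and sequential writes
write_iops_log=mixed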