RE: running jobs serially

On 2022-05-18 22:41:24, Vincent Fu wrote:
> The jobs you are running have the *stonewall* option which should make them run
> serially unless something is very broken.

Yeah, so that's something I added deliberately for that purpose, but
three things make me think it's not working properly.

 1. the timestamps are identical for the two jobs

        randwrite-4k-4g-1x: (groupid=1, jobs=1): err= 0: pid=1033477: Wed May 18 15:41:04 2022
         randread-4k-4g-1x: (groupid=0, jobs=1): err= 0: pid=1033470: Wed May 18 15:41:04 2022

 2. when fio starts, it says:

         Starting 2 processes

    I would have expected it to start one process at a time.

 3. when running larger batches, it starts laying out all files before
    starting the jobs:

$ fio ars.fio
randread-4k-4g-1x: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=posixaio, iodepth=1
randwrite-4k-4g-1x: (g=1): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=posixaio, iodepth=1
randread-64k-256m-16x: (g=2): rw=randread, bs=(R) 64.0KiB-64.0KiB, (W) 64.0KiB-64.0KiB, (T) 64.0KiB-64.0KiB, ioengine=posixaio, iodepth=16
...
randwrite-64k-256m-16x: (g=3): rw=randwrite, bs=(R) 64.0KiB-64.0KiB, (W) 64.0KiB-64.0KiB, (T) 64.0KiB-64.0KiB, ioengine=posixaio, iodepth=16
...
randread-1m-16g-1x: (g=4): rw=randread, bs=(R) 1024KiB-1024KiB, (W) 1024KiB-1024KiB, (T) 1024KiB-1024KiB, ioengine=posixaio, iodepth=1
randwrite-1m-16g-1x: (g=5): rw=randwrite, bs=(R) 1024KiB-1024KiB, (W) 1024KiB-1024KiB, (T) 1024KiB-1024KiB, ioengine=posixaio, iodepth=1
fio-3.25
Starting 36 processes
randread-4k-4g-1x: Laying out IO file (1 file / 4096MiB)
randwrite-4k-4g-1x: Laying out IO file (1 file / 4096MiB)
randread-64k-256m-16x: Laying out IO file (1 file / 256MiB)
randread-64k-256m-16x: Laying out IO file (1 file / 256MiB)
randread-64k-256m-16x: Laying out IO file (1 file / 256MiB)
randread-64k-256m-16x: Laying out IO file (1 file / 256MiB)
randread-64k-256m-16x: Laying out IO file (1 file / 256MiB)
randread-64k-256m-16x: Laying out IO file (1 file / 256MiB)
randread-64k-256m-16x: Laying out IO file (1 file / 256MiB)
randread-64k-256m-16x: Laying out IO file (1 file / 256MiB)
randread-64k-256m-16x: Laying out IO file (1 file / 256MiB)
randread-64k-256m-16x: Laying out IO file (1 file / 256MiB)
randread-64k-256m-16x: Laying out IO file (1 file / 256MiB)
randread-64k-256m-16x: Laying out IO file (1 file / 256MiB)
randread-64k-256m-16x: Laying out IO file (1 file / 256MiB)
randread-64k-256m-16x: Laying out IO file (1 file / 256MiB)
randread-64k-256m-16x: Laying out IO file (1 file / 256MiB)
randread-64k-256m-16x: Laying out IO file (1 file / 256MiB)
randwrite-64k-256m-16x: Laying out IO file (1 file / 256MiB)
randwrite-64k-256m-16x: Laying out IO file (1 file / 256MiB)
randwrite-64k-256m-16x: Laying out IO file (1 file / 256MiB)
randwrite-64k-256m-16x: Laying out IO file (1 file / 256MiB)
randwrite-64k-256m-16x: Laying out IO file (1 file / 256MiB)
randwrite-64k-256m-16x: Laying out IO file (1 file / 256MiB)
randwrite-64k-256m-16x: Laying out IO file (1 file / 256MiB)
randwrite-64k-256m-16x: Laying out IO file (1 file / 256MiB)
randwrite-64k-256m-16x: Laying out IO file (1 file / 256MiB)
randwrite-64k-256m-16x: Laying out IO file (1 file / 256MiB)
randwrite-64k-256m-16x: Laying out IO file (1 file / 256MiB)
randwrite-64k-256m-16x: Laying out IO file (1 file / 256MiB)
randwrite-64k-256m-16x: Laying out IO file (1 file / 256MiB)
randwrite-64k-256m-16x: Laying out IO file (1 file / 256MiB)
randwrite-64k-256m-16x: Laying out IO file (1 file / 256MiB)
randwrite-64k-256m-16x: Laying out IO file (1 file / 256MiB)
randread-1m-16g-1x: Laying out IO file (1 file / 16384MiB)
[...]

I would have expected those files to be laid out right before each job
starts, not all at once at the beginning, although I'm not sure what
difference that would make. Maybe it would save disk space, at least?
Say I have limited space left on the partition and want to run multiple
large jobs: I'd expect each job to clean up after itself...
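
(Tangent: if disk space is the real concern, I believe fio has an
unlink option that removes a job's data files once the job finishes; a
minimal, untested sketch of what I have in mind, reusing one of the big
jobs from above:)

    [randwrite-1m-16g-1x]
    rw=randwrite
    bs=1m
    size=16g
    ; assumption: unlink=1 deletes this job's data file when the job completes
    unlink=1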

> Here is documentation for the stonewall option:
>
> https://fio.readthedocs.io/en/latest/fio_doc.html#cmdoption-arg-stonewall

Speaking of which, it's not clear to me whether I need to add stonewall
to each job or whether I can just add it to the [global] section and be
done with it...
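
My current reading is that options in [global] are inherited by every
job, so putting stonewall there should be equivalent to repeating it in
each job section. A minimal sketch of what I mean (assuming that
inheritance works the same way for stonewall as for any other option):

    [global]
    ioengine=posixaio
    bs=4k
    size=4g
    ; assumption: stonewall is inherited by each job below, so every job
    ; waits for the previous one to finish before starting
    stonewall

    [randread-4k-4g-1x]
    rw=randread

    [randwrite-4k-4g-1x]
    rw=randwrite

If that's right, stonewall in [global] would serialize everything, and
per-job stonewall would only be needed to mix serial and parallel
groups in one file.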

> You could add the write_bw_log=filename and log_unix_epoch=1 options to
> confirm. You should see a timestamp for each IO and should be able to make
> sure that all the writes are happening after the reads.

So I tried this, and the output is a little hard to interpret. But
looking at:

    head -1 $(ls *bw*.log -v)

it does look like the first timestamp in each log increases from one
job to the next, so the tests are not running in parallel.
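
To make the comparison a bit more direct, something like this should
print the first and last timestamp of each log, assuming log_unix_epoch=1
makes the first comma-separated field a Unix timestamp in milliseconds:

    # assumption: field 1 of each log line is an epoch timestamp in msec
    for f in $(ls *bw*.log -v); do
        printf '%s: start=%s end=%s\n' "$f" \
            "$(head -1 "$f" | cut -d, -f1)" \
            "$(tail -1 "$f" | cut -d, -f1)"
    done

If no log's start precedes the previous log's end, the jobs really did
run one after another, whatever the final report's timestamps say.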

So maybe the bug is *just* 1 and 2: (1) the timestamps in the final
report are incorrect, and (2) processes are all started at once (and 1
may be related to 2!)

Does that make sense?

Thanks for the quick response!

-- 
Antoine Beaupré
torproject.org system administration


