help constructing job file

Hi

(newbie alert)

I am struggling with how to model a particular workload:
 - there are two groups of processes (8 writers and 8 readers),
   that run concurrently, writing to a single RAID unit

 - each of the writers
   - opens 36 files
   - appends to the files in 1M chunks until they reach 100G
   - closes those files and opens a new set

 - each of the readers
   - reads a group of 36 contemporaneous files from beginning to end
   - moves on to the next group

My goal is not to maximise numbers from fio tests but rather
to model the system and how it is likely to scale up or down.
In particular I want to observe the system behaviour over an
extended period (up to 24h) with readers & writers hammering away.
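
From the fio HOWTO I gather that keeping jobs running for a fixed wall-clock duration needs the time_based/runtime options; something like this in the global section (untested on my side):

```ini
; Keep each job running for the full duration, repeating its
; workload if it finishes early (semantics per the fio HOWTO).
time_based=1
runtime=24h
```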

Below is my initial sketch, which isn't right: in my tests (with a
smaller filesize, numjobs, and nrfiles) the file layout runs fine,
but I never see a prolonged period of reads running in parallel
with the writes.

[global]
direct=0
fsync=0
bs=4k
size=1M
ioengine=posixaio
rwmixread=50
rwmixwrite=50
group_reporting=1
per_job_logs=0
filesize=100G
file_append=1
invalidate=1
refill_buffers=1

[writer]
name=writer
readwrite=write
numjobs=8
nrfiles=36
file_service_type=roundrobin

[reader]
name=reader
readwrite=read
numjobs=8
nrfiles=36
file_service_type=roundrobin
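
For reference, here is a variant I've been considering but have not
validated; it assumes that bs sets the append chunk size, filesize
caps each individual file, and size is the total I/O per job (so
36 x 100G per writer):

```ini
; Untested variant: bs=1M matches the 1M append chunks described
; above, filesize=100G caps each file, and size=3600G (36 x 100G)
; should give each job enough total I/O to touch all of its files.
[global]
direct=0
bs=1M
ioengine=posixaio
group_reporting=1
per_job_logs=0
invalidate=1
refill_buffers=1
nrfiles=36
filesize=100G
size=3600G
file_service_type=roundrobin

[writer]
readwrite=write
numjobs=8

[reader]
readwrite=read
numjobs=8
```

The rwmixread/rwmixwrite options are dropped here since, as I
understand it, they only apply to mixed (readwrite/randrw) jobs,
not to separate pure-read and pure-write job sections.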

Any pointers would be most welcome.
Vince
--
To unsubscribe from this list: send the line "unsubscribe fio" in
the body of a message to majordomo@xxxxxxxxxxxxxxx
More majordomo info at  http://vger.kernel.org/majordomo-info.html


