RE: fio synchronization across client/server?

Have you tried making a job file for each client, then running both clients together with:

# fio --client=host1 job_a --client=host2 job_b --output=job_output

That should keep the clients synchronized.  Unfortunately it means you need a separate set of job files for each point where you would normally stonewall.  Perhaps adding a stonewall-client feature would help: any time stonewall-client is seen, that client's jobs are paused until all of the other clients' job files also reach a stonewall-client.  Then a set of job files could be collapsed into one job file per client.
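For the job file quoted below, that per-phase split would look something like this (the file names host1_read.fio, host2_read.fio, host1_write.fio and host2_write.fio are just placeholders I made up).  Each host gets a read-phase file containing the shared global section and only its own read job, e.g. host1_read.fio:

[global]
bs=4k
iodepth=16
direct=1
ioengine=libaio
randrepeat=0
time_based
runtime=60

[rrsda]
rw=randread
filename=/dev/sda

host2_read.fio would look the same (each host opens its own LUN), and the *_write.fio files would be identical apart from rw=randwrite.  The two phases then become two back-to-back invocations, and since a fio client invocation only returns once every backend has finished, the second command effectively acts as the cross-client stonewall:

# fio --client=host1 host1_read.fio --client=host2 host2_read.fio --output=read_output
# fio --client=host1 host1_write.fio --client=host2 host2_write.fio --output=write_output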

Regards,
Jeff

-----Original Message-----
From: fio-owner@xxxxxxxxxxxxxxx [mailto:fio-owner@xxxxxxxxxxxxxxx] On Behalf Of Jim Reuter
Sent: Friday, May 11, 2018 11:11 AM
To: fio@xxxxxxxxxxxxxxx
Subject: fio synchronization across client/server?

I am trying to figure out whether something I want to do is possible with fio.  Large high-performance storage systems often require load to be generated from multiple hosts over multiple interconnects to reach (and test) maximum performance, but the fio documentation says nothing about whether or how any synchronization is done between clients and servers (in the fio --client and --server sense of those terms).

Let's begin with a trivial single-node example:

[global]
bs=4k
iodepth=16
direct=1
ioengine=libaio
randrepeat=0
time_based
runtime=60

[rrsda]
rw=randread
filename=/dev/sda
[rrsdb]
rw=randread
filename=/dev/sdb

[rwsda]
stonewall
rw=randwrite
filename=/dev/sda
[rwsdb]
rw=randwrite
filename=/dev/sdb

The above will run concurrent random read workloads on /dev/sda and /dev/sdb, then switch to random write workloads on the same devices.  In my situation, with a multi-client external storage array, the two LUNs to test might be /dev/sda on host1 and /dev/sda on host2.  How do I achieve the above test with fio when the devices are on different nodes?  If I am reading the documentation correctly, using separate job files is an implicit stonewall, so I cannot just split the node-specific parts from above into separate job files.
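For concreteness, the node-specific split I have in mind would be one job file per host: a hypothetical host1.fio with the same [global] section as above plus that host's jobs,

[rrsda]
rw=randread
filename=/dev/sda

[rwsda]
stonewall
rw=randwrite
filename=/dev/sda

with host2.fio identical (each host sees its LUN as /dev/sda), both launched together with something like:

# fio --client=host1 host1.fio --client=host2 host2.fio --output=job_output

Whether the stonewall in one client's file would actually wait for the other client's read phase to finish is exactly the kind of synchronization I cannot find described in the documentation.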

I could split this into completely separate job files for the randread and randwrite parts, with one file per node, but then they just run independently, so I do not see what the client/server feature buys me here.  Or I could launch separate fio processes on each node and hope they start at roughly the same time, but that is a hack, and the variability in start times can skew the results.  What am I missing?  Can fio do this?  If not, what is the real purpose of the client/server feature?

--
To unsubscribe from this list: send the line "unsubscribe fio" in
the body of a message to majordomo@xxxxxxxxxxxxxxx
More majordomo info at  http://vger.kernel.org/majordomo-info.html



