Hello collective wisdom,

I'd like to run a parallel write job, with a hundred threads per node, on a cluster of tens of nodes, where each thread writes to its own file in its own directory. The file system is a parallel, shared one. Assuming it is mounted under /fs_mount_point, this is the layout I'm after:

host1
|- /fs_mount_point/<host1_ip>/$jobnum/$filenum  (e.g. /fs_mount_point/1.2.3.4/0/0)
|- /fs_mount_point/<host1_ip>/$jobnum/$filenum  (e.g. /fs_mount_point/1.2.3.4/1/0)
|- /fs_mount_point/<host1_ip>/$jobnum/$filenum  (e.g. /fs_mount_point/1.2.3.4/2/0)
...
host2
|- /fs_mount_point/<host2_ip>/$jobnum/$filenum  (e.g. /fs_mount_point/1.2.3.5/0/0)
|- /fs_mount_point/<host2_ip>/$jobnum/$filenum  (e.g. /fs_mount_point/1.2.3.5/1/0)
|- /fs_mount_point/<host2_ip>/$jobnum/$filenum  (e.g. /fs_mount_point/1.2.3.5/2/0)
...

and so on.

I'm running the fio daemon on all the client nodes and invoking jobs from the first node (node001) with something like:

    fio --client=host_list fio.job --section=<section>

The fio.job I started with looked like this:

    [global]
    group_reporting
    ioengine=posixaio
    direct=1
    directory=/mnt/fio
    filename_format=$jobnum.$filenum

    # Time options
    clocksource=gettimeofday
    runtime=30
    time_based=1
    ramp_time=5

    iodepth=1
    create_serialize=0

    # Run jobs serially
    stonewall

    [lat_4k_rw]
    rw=randrw
    rwmixread=0
    bs=4k
    size=1GB
    numjobs=120

This job file creates all the files in the same directory, with names like:

    /mnt/fio/<host_ip1>.0.0
    /mnt/fio/<host_ip1>.1.0
    /mnt/fio/<host_ip1>.2.0
    ...
    /mnt/fio/<host_ip2>.0.0
    /mnt/fio/<host_ip2>.1.0
    /mnt/fio/<host_ip2>.2.0
    ...

and so on.

To achieve the desired (or close to the desired) outcome, I've unsuccessfully tried the following:

1. filename_format=$jobnum/$filenum

2. IP=<host_ip> fio --server --daemonize
   and then using the ${IP} variable in the job file:
   filename_format=${IP}/$jobnum/$filenum
   but it seems that environment variables used in filename_format aren't actually expanded on the remote daemon.
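To make the target layout concrete: I'd also be fine with pre-creating the per-host directory tree on each node before the run, in case fio doesn't create intermediate directories named in filename_format itself. A sketch of what I mean (the mount point and IP are hard-coded here for illustration; on the real nodes the IP would come from something like `hostname -i`):

```shell
# Pre-create one directory per fio job on this host:
#   $MOUNT/$IP/<jobnum>/   -- fio would then write file <filenum> inside it.
MOUNT=/tmp/fs_mount_point   # illustrative stand-in for the real /fs_mount_point
IP=1.2.3.4                  # illustrative; really something like: IP=$(hostname -i)
NUMJOBS=120                 # matches numjobs=120 in the job file

j=0
while [ "$j" -lt "$NUMJOBS" ]; do
    mkdir -p "$MOUNT/$IP/$j"
    j=$((j + 1))
done

# Job 0's first file would then land at:
echo "$MOUNT/$IP/0/0"
```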
Any suggestions are welcome. -- Michael