On 10/12/14, 8:27 PM, "Jens Axboe" <axboe@xxxxxxxxx> wrote:

>On 2014-10-12 14:28, Neto, Antonio Jose Rodrigues wrote:
>>
>>
>>> On Oct 12, 2014, at 3:26 PM, Jens Axboe <axboe@xxxxxxxxx> wrote:
>>>
>>>> On 2014-10-12 13:12, Jens Axboe wrote:
>>>>> On 2014-10-12 09:26, Neto, Antonio Jose Rodrigues wrote:
>>>>> Just applied the patch and it's perfect.
>>>>>
>>>>> Please see below:
>>>>>
>>>>> Nossa Senhora:fiop neto$ ./fio --client 10.61.109.151 --remote-config
>>>>> /root/fiop/iotest --client 10.61.109.152 --remote-config /root/fio/iotest
>>>>> hostname=s2, be=0, 64-bit, os=Linux, arch=x86-64,
>>>>> fio=fio-2.1.13-59-gaa7bc, flags=1
>>>>> hostname=s1, be=0, 64-bit, os=Linux, arch=x86-64,
>>>>> fio=fio-2.1.13-59-gaa7bc, flags=1
>>>>> <s2> fio: unable to open '/root/fio/iotest' job file
>>>>> <s1> workload: (g=0): rw=write, bs=32K-32K/32K-32K/32K-32K,
>>>>> ioengine=libaio, iodepth=4
>>>>> <s1> ...
>>>>> <s1> Starting 64 threads
>>>>> Jobs: 64 (f=1024): [W(64)] [43.3% done] [882.5M/0K/0K /s] [27.6K/0/0 iops]
>>>>> [eta 00m:34s]
>>>>
>>>> Great, at least that took care of that issue. As to missing output from
>>>> one client, I've seen that here before, I will look into that. It's a
>>>> separate issue.
>>>
>>> For the above one, s2 never started since it could not find the config
>>> file you gave it. Have you seen missing final output for cases where
>>> the jobs did all start? This particular one does not look valid.
>>>
>>> --
>>> Jens Axboe
>>
>> Yes, I did.
>>
>> I ran using both servers but the output was showing only the latest
>> client - s2
>
>Odd. Can you reproduce and send the output of such a run?
>
>--
>Jens Axboe

Hi Jens,

This is neto from Brazil

How are you?

I believe the issue could be the formatting of the output...

Also, when running with both clients, it seems to me that
unified_rw_reporting is not working across the clients... I do not get a
total for all clients...

Please see below:

Config file for s1:

[workload]
bs=32k
ioengine=libaio
iodepth=4
size=160g
numjobs=64
direct=1
runtime=60
file_service_type=random
filename=/n1_11/f1:/n1_11/f2:/n1_11/f3:/n1_11/f4:/n1_11/f5:/n1_11/f6:/n1_11/f7:/n1_11/f8:/n1_11/f9:/n1_11/f10:/n1_11/f11:/n1_11/f12:/n1_11/f13:/n1_11/f14:/n1_11/f15:/n1_11/f16
rw=write
thread
unified_rw_reporting=1
group_reporting=1

Config file for s2:

[workload]
bs=32k
ioengine=libaio
iodepth=4
size=160g
numjobs=64
direct=1
runtime=60
file_service_type=random
filename=/n1_21/g1:/n1_21/g2:/n1_21/g3:/n1_21/g4:/n1_21/g5:/n1_21/g6:/n1_21/g7:/n1_21/g8:/n1_21/g9:/n1_21/g10:/n1_21/g11:/n1_21/g12:/n1_21/g13:/n1_21/g14:/n1_21/g15:/n1_21/g16
rw=write
thread
unified_rw_reporting=1
group_reporting=1

Client command line (--output did not work):

./fio --client 10.61.109.151 --remote-config /root/fiop/iotest --client
10.61.109.152 --remote-config /root/fiop/iotest > multiple-clients

Nossa Senhora:fiop neto$ cat multiple-clients
hostname=s2, be=0, 64-bit, os=Linux, arch=x86-64, fio=fio-2.1.13-59-gaa7bc, flags=1
hostname=s1, be=0, 64-bit, os=Linux, arch=x86-64, fio=fio-2.1.13-59-gaa7bc, flags=1
<s2> workload: (g=0): rw=write, <s1> workload: (g=0): rw=write,
bs=32K-32K/32K-32K/32K-32K, bs=32K-32K/32K-32K/32K-32K,
ioengine=libaio, iodepth=4 ioengine=libaio, iodepth=4
<s2> ...
<s1> ...
<s2> Starting 64 threads <s1> Starting 64 threads
<s2> 128 (f=2048): [W(64)] [100.0% done] [1481M/0K/0K /s] [46.3K/0/0 iops] [eta 00m:00s]
workload: (groupid=0, jobs=64): err= 0: pid=13568: Mon Oct 13 09:26:32 2014
  mixed: io=37370MB, bw=637546KB/s, iops=19923, runt= 60022msec
    slat (usec): min=9, max=811, avg=20.85, stdev= 9.34
    clat (usec): min=2, max=585971, avg=12759.21, stdev=16114.41
     lat (usec): min=303, max=585986, avg=12780.31, stdev=16114.37
    clat percentiles (usec):
     |  1th=[  572],  5th=[  852], 10th=[ 1096], 20th=[ 2096], 30th=[ 6304],
     | 40th=[ 7136], 50th=[ 8160], 60th=[10816], 70th=[14016], 80th=[20096],
     | 90th=[29568], 95th=[36096], 99th=[50432], 100th=[71168], 100th=[218112],
     | 100th=[248832], 100th=[317440]
    bw (KB /s): min=  387, max=58816, per=1.57%, avg=10013.38, stdev=3302.95
    lat (usec) : 4=0.01%, 250=0.01%, 500=0.35%, 750=2.88%, 1000=4.98%
    lat (msec) : 2=11.36%, 4=5.94%, 10=32.81%, 20=21.54%, 50=19.10%
    lat (msec) : 100=0.66%, 250=0.32%, 500=0.05%, 750=0.01%
  cpu          : usr=0.41%, sys=0.61%, ctx=1322346, majf=0, minf=175
  IO depths    : 1=0.1%, 2=0.1%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0%
     submit    : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
     complete  : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
     issued    : total=r=1195837/w=0/d=0, short=r=0/w=0/d=0, drop=r=0/w=0/d=0
     latency   : target=0, window=0, percentile=100.00%, depth=4

Run status group 0 (all jobs):
  MIXED: io=37370MB, aggrb=637545KB/s, minb=637545KB/s, maxb=637545KB/s,
  mint=60022msec, maxt=60022msec
<s1> workload: (groupid=0, jobs=64): err= 0: pid=23378: Mon Oct 13 09:26:32 2014
  mixed: io=39328MB, bw=670996KB/s, iops=20968, runt= 60018msec
    slat (usec): min=9, max=865, avg=19.48, stdev= 8.78
    clat (usec): min=3, max=834759, avg=12122.18, stdev=16022.95
     lat (usec): min=288, max=834776, avg=12141.92, stdev=16022.99
    clat percentiles (usec):
     |  1th=[  588],  5th=[  860], 10th=[ 1112], 20th=[ 1848], 30th=[ 5920],
     | 40th=[ 6816], 50th=[ 7648], 60th=[10048], 70th=[13248], 80th=[18816],
     | 90th=[28544], 95th=[34560], 99th=[49408], 100th=[67072], 100th=[214016],
     | 100th=[246784], 100th=[374784]
    bw (KB /s): min=   62, max=53205, per=1.57%, avg=10546.28, stdev=2892.35
    lat (usec) : 4=0.01%, 100=0.01%, 250=0.01%, 500=0.28%, 750=2.70%
    lat (usec) : 1000=5.02%
    lat (msec) : 2=13.21%, 4=6.35%, 10=32.46%, 20=21.33%, 50=17.70%
    lat (msec) : 100=0.60%, 250=0.30%, 500=0.05%, 750=0.01%, 1000=0.01%
  cpu          : usr=0.39%, sys=0.61%, ctx=1347629, majf=0, minf=205
  IO depths    : 1=0.1%, 2=0.1%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0%
     submit    : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
     complete  : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
     issued    : total=r=1258494/w=0/d=0, short=r=0/w=0/d=0, drop=r=0/w=0/d=0
     latency   : target=0, window=0, percentile=100.00%, depth=4

Run status group 0 (all jobs):
  MIXED: io=39328MB, aggrb=670995KB/s, minb=670995KB/s, maxb=670995KB/s,
  mint=60018msec, maxt=60018msec

My suggestion for the format would be something like:

128 (f=2048): [W(64)] [100.0% done] [1481M/0K/0K /s] [46.3K/0/0 iops] [eta 00m:00s]

Client <s2>
workload: (groupid=0, jobs=64): err= 0: pid=13568: Mon Oct 13 09:26:32 2014
....

Client <s1>
workload: (groupid=0, jobs=64): err= 0: pid=23378: Mon Oct 13 09:26:32 2014
.....

Total Clients = 2
Aggregate Workload: xxx MB/s yyyy IOPS zzzz latency
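For reference, assuming the aggregate line would simply be the sum of the two
per-client reports above (latency would presumably be a weighted average
rather than a sum), by hand it would come out to roughly:

  io   = 37370MB + 39328MB       = 76698MB
  bw   = 637546KB/s + 670996KB/s = 1308542KB/s (~1278MB/s)
  iops = 19923 + 20968           = 40891

As a rough workaround sketch (just post-processing the saved output above,
not an fio feature; it relies on the "iops=" field as it appears in the
per-client "mixed:" lines), the total iops can be pulled out of the
multiple-clients file with:

  # sum the per-client iops values from the saved output
  grep -o 'iops=[0-9]*' multiple-clients | awk -F= '{s+=$2} END {print "total iops:", s}'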