I would assume you have a loop-back HBA, since you are not going to get
max bandwidth with an "Unknown" storage device in the back-end.

Your best bet is a large transfer size (bs=256k) and a high queue depth
(iodepth=256). Make sure your HBA queue depth is not limited.

Do the math again: you cannot get 1600 MB/s out of an 8Gbps fibre link,
even in theory.

--Alireza

On Mon, Jan 18, 2016 at 7:04 PM, Thierry BERTAUD <tbertaud@xxxxxxxxxxxx> wrote:
> David and Vasu,
>
>> Agree with Vasu - assuming you're trying to hit numbers a vendor
>> spec'd, however, on a flash drive of some sort.
>
>> You're doing random writes, which is also tough on some flash drives
>> if they're into write amplification. Try changing to random reads;
>> you only need a couple of threads (with a depth of 128 each) to
>> generate enough iodepth to saturate most drives. 64 threads might be
>> more threads than you have CPUs to run them on, so you're context
>> switching, which also hurts performance.
>
> I am testing against a SAN LUN (Infinidat F2000).
> This is my AIX LUN:
>
> # lsattr -El hdisk6
> PCM              PCM/friend/NFINIDATpcm  Path Control Module           False
> PR_key_value                             Persistent Reserve Key value  True
> algorithm        round_robin             Algorithm                     True
> hcheck_cmd       test_unit_rdy           Health Check Command          True
> hcheck_interval  60                      Health Check Interval         True
> hcheck_mode      nonactive               Health Check Mode             True
> lun_id           0x1000000000000         Logical Unit Number ID        False
> lun_reset_spt    yes                     LUN Reset Supported           True
> max_transfer     0x80000                 Maximum TRANSFER Size         True
> node_name        0x5742b0f00004c700      FC Node Name                  False
> pvid             none                    Physical volume identifier    False
> q_type           simple                  Queuing TYPE                  True
> queue_depth      64                      Queue DEPTH                   True
> reserve_policy   no_reserve              Reserve Policy                True
> rw_timeout       30                      READ/WRITE time out value     True
> scsi_id          0x1e0800                SCSI ID                       False
> unique_id        3B1F742b0f0000004c700000000000001d609InfiniBox08NFINIDATfcp  Unique device identifier  False
> ww_name          0x5742b0f00004c711      FC World Wide Name            False
> #
>
> With this conf:
>
> # cat randwrite.fio
> [global]
> thread
> numjobs=1
> iodepth=1
> group_reporting
> bs=4k
> norandommap=1
> refill_buffers
> direct=1
> ioengine=posixaio
> runtime=300
> time_based
> filename=/dev/hdisk6
> log_avg_msec=1000
>
> [randread_32_128]
> rw=randread
> numjobs=32
> iodepth=128
> stonewall
> #
>
> I now get bad results:
> IOPS: 2,800
> Bandwidth: 11 MB/s
> Latency: 6 ms
>
> Before, with the old configuration file:
> IOPS: 42,000
> Bandwidth: 164 MB/s
> Latency: 0.2 ms
>
> Regards,
> Thierry
> --
> To unsubscribe from this list: send the line "unsubscribe fio" in
> the body of a message to majordomo@xxxxxxxxxxxxxxx
> More majordomo info at http://vger.kernel.org/majordomo-info.html
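The "do the math" advice above can be sanity-checked with a quick sketch. This is an editor's illustration, not part of the thread: it assumes 8G Fibre Channel uses 8b/10b encoding (so roughly 80% of the line rate carries payload, about 800 MB/s per port), and it cross-checks the IOPS and bandwidth figures reported in the message at a 4 KiB block size.

```python
# Sanity-check fio results against link capacity and IOPS math.
# Assumption: 8GFC uses 8b/10b encoding, so usable payload is
# roughly line_rate * 8/10, i.e. about 800 MB/s for a single 8G port.

def fc_payload_mb_s(gbps: float) -> float:
    """Approximate usable payload rate of a Fibre Channel link in MB/s."""
    return gbps * 1e9 * (8 / 10) / 8 / 1e6  # bits -> bytes -> MB

def iops_to_mb_s(iops: float, bs_bytes: int) -> float:
    """Bandwidth implied by an IOPS figure at a given block size."""
    return iops * bs_bytes / 1e6

print(fc_payload_mb_s(8))          # -> 800.0; one 8G port cannot do 1600 MB/s
print(iops_to_mb_s(2_800, 4096))   # -> ~11.5 MB/s, matching the "bad" run
print(iops_to_mb_s(42_000, 4096))  # -> ~172 MB/s, near the earlier 164 MB/s
```

The two IOPS conversions agree with the numbers Thierry reported (2,800 IOPS at 4k is about 11 MB/s; 42,000 IOPS at 4k is on the order of 164-172 MB/s), which suggests the block size, not the link, is what caps bandwidth in the 4k random-read job.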