Increase your iodepth as I said earlier. You may ask your storage provider
why reads and writes behave like this. You can also increase numjobs to push
more IO with more fio processes/threads.

On Mon, Jan 18, 2016 at 7:34 PM, Thierry BERTAUD <tbertaud@xxxxxxxxxxxx> wrote:
> Alireza,
>
> I use 2 HBAs (fscsi1 -> fabric1, fscsi2 -> fabric2). Each fabric is connected
> to 3 nodes.
>
> # lspath | grep hdisk6
> Enabled hdisk6 fscsi1
> Enabled hdisk6 fscsi1
> Enabled hdisk6 fscsi3
> Enabled hdisk6 fscsi3
> Enabled hdisk6 fscsi1
> Enabled hdisk6 fscsi3
> #
>
> With:
>
> # cat randwrite.fio
> [global]
> thread
> numjobs=1
> iodepth=1
> group_reporting
> bs=256k
> norandommap=1
> refill_buffers
> direct=1
> ioengine=posixaio
> runtime=300
> time_based
> filename=/dev/hdisk6
> log_avg_msec=1000
> [randread_32_256]
> rw=randread
> numjobs=32
> iodepth=256
> stonewall
> #
>
> IOPS: 23 000
> Bandwidth: 85 MB/s
> Latency: 0.5 ms
>
> but with this one:
>
> # cat randwrite.fio
> [global]
> thread
> numjobs=1
> iodepth=1
> group_reporting
> bs=4k
> norandommap=1
> refill_buffers
> direct=1
> ioengine=posixaio
> runtime=300
> time_based
> filename=/dev/hdisk6
> log_avg_msec=1000
> [randread_64_64]
> rw=randread
> numjobs=64
> iodepth=64
> stonewall
> #
>
> IOPS: 10 000
> Bandwidth: 40 MB/s
> Latency: 1.5 ms
>
> but write seems better than read:
>
> # cat randwrite.fio
> [global]
> thread
> numjobs=1
> iodepth=1
> group_reporting
> bs=4k
> norandommap=1
> refill_buffers
> direct=1
> ioengine=posixaio
> runtime=300
> time_based
> filename=/dev/hdisk6
> log_avg_msec=1000
> [randwrite_64_64]
> rw=randwrite
> numjobs=64
> iodepth=64
> stonewall
> #
>
> IOPS: 42 000
> Bandwidth: 163 MB/s
> Latency: 0.2 ms
>
> I don't understand why, in the last test, randwrite is better than randread.
>
> I'm lost and don't understand how to stress the LUN.
> Regards,
> Thierry
>
> ________________________________
> From: Alireza Haghdoost <alireza@xxxxxxxxxx>
> Sent: Tuesday, 19 January 2016, 02:06
> To: Thierry BERTAUD
> Cc: David Nellans; fio@xxxxxxxxxxxxxxx
> Subject: Re: How to stress a 8Gbps card to get 500 000 IOPS or 1600 MB/s
>
> I would assume you have a loop-back HBA, since you are not going to get max
> bandwidth with an "Unknown" storage device in the back-end.
> Your best bet is a large transfer size like:
> bs=256k
>
> and a high queue depth:
> iodepth=256
>
> Make sure your HBA queue depth is not limited.
>
> Do the math again: you cannot get 1600 MB/s with an 8Gbps fiber, even in theory.
>
> --Alireza
>
> On Mon, Jan 18, 2016 at 6:29 PM, Thierry BERTAUD <tbertaud@xxxxxxxxxxxx> wrote:
>>
>> David,
>>
>> Sorry for the typos.
>> I pasted rather than copied the configuration file.
>>
>> # cat randwrite.fio
>> [global]
>> thread
>> numjobs=1
>> iodepth=1
>> group_reporting
>> bs=4k
>> norandommap=1
>> refill_buffers
>> direct=1
>> ioengine=posixaio
>> runtime=300
>> time_based
>> filename=/dev/hdisk6
>> log_avg_msec=1000
>> [randwrite_64_64]
>> rw=randwrite
>> numjobs=64
>> iodepth=64
>> stonewall
>> #
>>
>> Regards,
>> Thierry Bertaud
>> Tel: 01 60 95 51 41
>>
>> PS: Please copy the group dsisystunix@xxxxxxxxxxxx on any system request.
>>
>> ________________________________________
>> From: David Nellans <david@xxxxxxxxxxx>
>> Sent: Tuesday, 19 January 2016, 01:15
>> To: Thierry BERTAUD
>> Cc: fio@xxxxxxxxxxxxxxx
>> Subject: Re: How to stress a 8Gbps card to get 500 000 IOPS or 1600 MB/s
>>
>> You have two typos in your config that will likely result in less than
>> sufficient iodepth to hit 500k IOPS. Check whether those are in your actual
>> config or not...
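[Editor's note: Alireza's ceiling argument can be checked with quick arithmetic. The sketch below assumes 8GFC's 8b/10b line encoding and ignores frame and protocol overhead, so the real usable rate is slightly lower still.]

```python
# Back-of-the-envelope check of the 8 Gbps Fibre Channel limit.
# Assumption: 8b/10b encoding (10 bits on the wire per 8 data bits),
# frame overhead ignored.

LINK_GBPS = 8            # nominal line rate
ENCODING_EFF = 8 / 10    # 8b/10b efficiency

max_mb_s = LINK_GBPS * 1e9 * ENCODING_EFF / 8 / 1e6
print(f"theoretical max per direction: {max_mb_s:.0f} MB/s")   # 800 MB/s

# What 500 000 IOPS at 4 KiB per IO would actually require:
needed_mb_s = 500_000 * 4096 / 1e6
print(f"500k x 4 KiB IOPS needs: {needed_mb_s:.0f} MB/s")      # 2048 MB/s
```

So a single 8 Gbps link tops out around 800 MB/s of payload per direction, and 500k IOPS is only reachable on one link with transfer sizes well under 4 KiB per IO or by spreading load across both HBAs.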
>>
>> > On Jan 18, 2016, at 5:58 PM, Thierry BERTAUD <tbertaud@xxxxxxxxxxxx> wrote:
>> >
>> > Hello fio team,
>> >
>> > I tried to stress my 8 Gbps HBA with fio in a lot of cases.
>> > I know that the HBA card is capable of 500 000 IOPS, or a data rate
>> > near 8 Gbps (1600 MB/sec).
>> > I tried to change block size, iodepth, and numjobs, but I don't get
>> > good results (42 000 IOPS and 162 MB/s throughput).
>> >
>> > Below is my fio config:
>> > [global]
>> > thread
>> > numjobs=1
>> > iodepth=1
>> > group_reporting
>> > bs=4k
>> > norandommap=1
>> > refill_buffers
>> > direct=1
>> > ioengine=posixaio
>> > runtime=300
>> > time_based
>> > filename=/dev/hdisk6
>> > log_avg_msec=1000
>> > [randwrite_64_64]
>> > rw=randwrite
>> > numjobs=64
>> > ioepth=64
>> > stonewal
>> >
>> > Regards,
>> > Thierry
>> > --
>> > To unsubscribe from this list: send the line "unsubscribe fio" in
>> > the body of a message to majordomo@xxxxxxxxxxxxxxx
>> > More majordomo info at http://vger.kernel.org/majordomo-info.html
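[Editor's note: the two typos David flags are `ioepth=64` and `stonewal` in the job section above, which leave the job at the `[global]` defaults of `iodepth=1`. A corrected job section would read as follows; the values themselves are Thierry's, not tuned recommendations.]

```ini
; Corrected job section: "ioepth" -> "iodepth", "stonewal" -> "stonewall".
[randwrite_64_64]
rw=randwrite
numjobs=64
iodepth=64
stonewall
```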