RE: random_generator=lfsr overhead with more disks?

A few comments:

-How does the CPU utilization compare between the lfsr and the default tausworthe generator?  Is CPU utilization actually higher with lfsr?  If it is near saturation on 16 devices, then naturally throughput would decrease.  (One way to capture this is sketched after this list.)

-It seems you are intentionally trying to hit each block exactly once, which is why you are not using the norandommap parameter.

-Does a runtime of 2 minutes collect enough data points to assess the behavior?  What happens if you extend the runtime?
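
One way to compare CPU utilization between the two generators is to sample it alongside each run. This is only a minimal sketch, assuming the sysstat package (mpstat) is installed and reusing the single-volume lfsr job from the quoted message below; fio's own per-group "cpu : usr=..., sys=..." summary line is also worth comparing between the two runs.

# Sample system-wide CPU usage every 10 seconds while the job runs.
mpstat -P ALL 10 > cpu_lfsr.log 2>&1 &
MPSTAT_PID=$!

/opt/E8/bin/fio --name=global --thread=1 --direct=1 --group_reporting=1 \
    --iomem_align=4k --name=PT7 --rw=randrw --rwmixread=100 --iodepth=40 \
    --numjobs=8 --bs=4096 --size=450GiB --runtime=120 --filename='/dev/e8b0' \
    --ioengine=libaio --numa_cpu_nodes=0 --random_generator=lfsr

kill $MPSTAT_PID

# Repeat without --random_generator=lfsr (redirecting mpstat to a second log)
# and compare the two logs.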

Regards,
Jeff

-----Original Message-----
From: fio-owner@xxxxxxxxxxxxxxx [mailto:fio-owner@xxxxxxxxxxxxxxx] On Behalf Of Michael Green
Sent: Tuesday, March 13, 2018 12:24 PM
To: fio@xxxxxxxxxxxxxxx
Subject: random_generator=lfsr overhead with more disks?

Hello collective wisdom,

This is my first post here; I apologize if this has been asked before, and I would appreciate pointers or tips on how to search the archives more effectively.

I work for E8 Storage and have been testing our product with fio for quite some time now. I've encountered an interesting effect that random_generator=lfsr seems to have on the results, and I would appreciate your thoughts.

Here are two jobs. Both are 4k, 100% random read, against a single volume. The first job uses lfsr, the second does not.

[root@sm28 csv]#  /opt/E8/bin/fio --name=global --thread=1 --direct=1 --group_reporting=1 --iomem_align=4k --name=PT7 --rw=randrw --rwmixread=100 --iodepth=40 --numjobs=8 --bs=4096 --size=450GiB --runtime=120 --filename='/dev/e8b0' --ioengine=libaio --numa_cpu_nodes=0 --random_generator=lfsr
PT7: (g=0): rw=randrw, bs=4K-4K/4K-4K/4K-4K, ioengine=libaio, iodepth=40 ...
fio-2.2.11
Starting 8 threads
Jobs: 8 (f=8): [r(8)] [100.0% done] [3774MB/0KB/0KB /s] [966K/0/0 iops] [eta 00m:00s]
PT7: (groupid=0, jobs=8): err= 0: pid=25976: Tue Mar 13 20:56:48 2018
  read : io=447350MB, bw=3727.1MB/s, iops=954339, runt=120001msec
    slat (usec): min=1, max=540, avg= 3.81, stdev= 4.14
    clat (usec): min=46, max=4793, avg=330.67, stdev=76.39
     lat (usec): min=50, max=4838, avg=334.65, stdev=76.47



[root@sm28 csv]#  /opt/E8/bin/fio --name=global --thread=1 --direct=1 --group_reporting=1 --iomem_align=4k --name=PT7 --rw=randrw --rwmixread=100 --iodepth=40 --numjobs=8 --bs=4096 --size=450GiB --runtime=120 --filename='/dev/e8b0' --ioengine=libaio --numa_cpu_nodes=0
PT7: (g=0): rw=randrw, bs=4K-4K/4K-4K/4K-4K, ioengine=libaio, iodepth=40 ...
fio-2.2.11
Starting 8 threads
Jobs: 8 (f=8): [r(8)] [100.0% done] [3729MB/0KB/0KB /s] [955K/0/0 iops] [eta 00m:00s]
PT7: (groupid=0, jobs=8): err= 0: pid=18605: Tue Mar 13 20:17:27 2018
  read : io=447243MB, bw=3726.2MB/s, iops=954110, runt=120001msec
    slat (usec): min=1, max=629, avg= 3.73, stdev= 4.08
    clat (usec): min=45, max=4327, avg=330.66, stdev=77.61
     lat (usec): min=47, max=4329, avg=334.55, stdev=77.65


Now the same with 16 volumes (this time the job without lfsr is shown first). Notice how latency went up and IOPS went down with the lfsr generator:

[root@sm28 csv]#  /opt/E8/bin/fio --name=global --thread=1 --direct=1 --group_reporting=1 --iomem_align=4k --name=PT7 --rw=randrw --rwmixread=100 --iodepth=40 --numjobs=8 --bs=4096 --size=450GiB --runtime=120 --filename='/dev/e8b0:/dev/e8b1:/dev/e8b2:/dev/e8b3:/dev/e8b4:/dev/e8b5:/dev/e8b6:/dev/e8b7:/dev/e8b8:/dev/e8b9:/dev/e8b10:/dev/e8b11:/dev/e8b12:/dev/e8b13:/dev/e8b14:/dev/e8b15' --ioengine=libaio --numa_cpu_nodes=0
PT7: (g=0): rw=randrw, bs=4K-4K/4K-4K/4K-4K, ioengine=libaio, iodepth=40 ...
fio-2.2.11
Starting 8 threads
Jobs: 8 (f=0): [r(8)] [12.7% done] [3641MB/0KB/0KB /s] [932K/0/0 iops] [eta 13m:50s]
PT7: (groupid=0, jobs=8): err= 0: pid=27644: Tue Mar 13 21:04:24 2018
  read : io=443136MB, bw=3692.8MB/s, iops=945349, runt=120001msec
    slat (usec): min=1, max=1409, avg= 4.41, stdev= 6.90
    clat (usec): min=24, max=5222, avg=332.95, stdev=113.74
     lat (usec): min=38, max=5250, avg=337.53, stdev=113.11

[root@sm28 csv]#  /opt/E8/bin/fio --name=global --thread=1 --direct=1 --group_reporting=1 --iomem_align=4k --name=PT7 --rw=randrw --rwmixread=100 --iodepth=40 --numjobs=8 --bs=4096 --size=450GiB --runtime=120 --filename='/dev/e8b0:/dev/e8b1:/dev/e8b2:/dev/e8b3:/dev/e8b4:/dev/e8b5:/dev/e8b6:/dev/e8b7:/dev/e8b8:/dev/e8b9:/dev/e8b10:/dev/e8b11:/dev/e8b12:/dev/e8b13:/dev/e8b14:/dev/e8b15' --ioengine=libaio --numa_cpu_nodes=0 --random_generator=lfsr
PT7: (g=0): rw=randrw, bs=4K-4K/4K-4K/4K-4K, ioengine=libaio, iodepth=40 ...
fio-2.2.11
Starting 8 threads
Jobs: 8 (f=128): [r(8)] [100.0% done] [3199MB/0KB/0KB /s] [819K/0/0 iops] [eta 00m:00s]
PT7: (groupid=0, jobs=8): err= 0: pid=29711: Tue Mar 13 21:16:13 2018
  read : io=382920MB, bw=3190.1MB/s, iops=816888, runt=120001msec
    slat (usec): min=1, max=1069, avg= 3.77, stdev= 4.33
    clat (usec): min=16, max=3593, avg=387.12, stdev=189.09
     lat (usec): min=36, max=3613, avg=391.06, stdev=189.05

Between 1 and 16, I've also tried 4, 8 and 12 volumes. Performance gradually worsens with lfsr as the number of volumes increases, but stays flat without it.
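
A sweep like that can be scripted so both generators go through the same volume counts; here is a minimal sketch, assuming the devices are named /dev/e8b0 through /dev/e8b15 as in the jobs above, with hypothetical per-run output file names:

for gen in tausworthe lfsr; do
  for n in 1 4 8 12 16; do
    # Build a colon-separated device list, e.g. /dev/e8b0:/dev/e8b1:...
    devs=$(seq -s: -f "/dev/e8b%g" 0 $((n - 1)))
    /opt/E8/bin/fio --name=global --thread=1 --direct=1 --group_reporting=1 \
        --iomem_align=4k --name=PT7 --rw=randrw --rwmixread=100 --iodepth=40 \
        --numjobs=8 --bs=4096 --size=450GiB --runtime=120 \
        --filename="$devs" --ioengine=libaio --numa_cpu_nodes=0 \
        --random_generator=$gen --output="PT7_${gen}_${n}vols.log"
  done
done

The iops, lat and cpu lines from each output file can then be compared side by side to see where the lfsr runs start to diverge.
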
--
Thanks,
Michael Green