Re: help understanding the output of fio

Felix,

Based on your previous emails about the drive, it would appear that the
hardware (SSD, cables, port) is fine and the drive performs as expected.

Go back and run your original ZFS test against a file in your mounted
ZFS dataset directory, and drop "--direct=1" from the command: ZFS does
not yet support direct I/O, and forcing unbuffered I/O against a ZFS
directory has a very negative impact on performance. This is a ZFS
thing, not your kernel or hardware.

--Jeff

On Thu, Apr 4, 2024 at 12:00 PM Felix Rubio <felix@xxxxxxxxx> wrote:
>
> hey Jeff,
>
> Good catch! I have run the following command:
>
> fio --name=seqread --numjobs=1 --time_based --runtime=60s --ramp_time=2s \
>     --iodepth=8 --ioengine=libaio --direct=1 --verify=0 --group_reporting=1 \
>     --bs=1M --rw=read --size=1G --filename=/dev/sda
>
> (/dev/sda and /dev/sdd are the drives I have in one of the pools), and this is
> what I get:
>
> seqread: (g=0): rw=read, bs=(R) 1024KiB-1024KiB, (W) 1024KiB-1024KiB,
> (T) 1024KiB-1024KiB, ioengine=libaio, iodepth=8
> fio-3.33
> Starting 1 process
> Jobs: 1 (f=1): [R(1)][100.0%][r=388MiB/s][r=388 IOPS][eta 00m:00s]
> seqread: (groupid=0, jobs=1): err= 0: pid=2368687: Thu Apr  4 20:56:06 2024
>    read: IOPS=382, BW=383MiB/s (401MB/s)(22.4GiB/60020msec)
>      slat (usec): min=17, max=3098, avg=68.94, stdev=46.04
>      clat (msec): min=14, max=367, avg=20.84, stdev= 6.61
>       lat (msec): min=15, max=367, avg=20.91, stdev= 6.61
>      clat percentiles (msec):
>       |  1.00th=[   21],  5.00th=[   21], 10.00th=[   21], 20.00th=[   21],
>       | 30.00th=[   21], 40.00th=[   21], 50.00th=[   21], 60.00th=[   21],
>       | 70.00th=[   21], 80.00th=[   21], 90.00th=[   21], 95.00th=[   21],
>       | 99.00th=[   25], 99.50th=[   31], 99.90th=[   48], 99.95th=[   50],
>       | 99.99th=[  368]
>     bw (  KiB/s): min=215040, max=399360, per=100.00%, avg=392047.06, stdev=19902.89, samples=120
>     iops        : min=  210, max=  390, avg=382.55, stdev=19.43, samples=120
>    lat (msec)   : 20=0.19%, 50=99.80%, 100=0.01%, 500=0.03%
>    cpu          : usr=0.39%, sys=1.93%, ctx=45947, majf=0, minf=37
>    IO depths    : 1=0.0%, 2=0.0%, 4=0.0%, 8=100.0%, 16=0.0%, 32=0.0%, >=64=0.0%
>       submit    : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
>       complete  : 0=0.0%, 4=100.0%, 8=0.1%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
>       issued rwts: total=22954,0,0,0 short=0,0,0,0 dropped=0,0,0,0
>       latency   : target=0, window=0, percentile=100.00%, depth=8
>
> Run status group 0 (all jobs):
>     READ: bw=383MiB/s (401MB/s), 383MiB/s-383MiB/s (401MB/s-401MB/s), io=22.4GiB (24.1GB), run=60020-60020msec
>
> Disk stats (read/write):
>    sda: ios=23817/315, merge=0/0, ticks=549704/132687, in_queue=683613, util=99.93%
>
> 400 MBps!!! This is a number I have never experienced. I understand this
> means I need to go back to the openzfs chat/forum?
>
> ---
> Felix Rubio
> "Don't believe what you're told. Double check."
>


-- 
------------------------------
Jeff Johnson
Co-Founder
Aeon Computing

jeff.johnson@xxxxxxxxxxxxxxxxx
www.aeoncomputing.com
t: 858-412-3810 x1001   f: 858-412-3845
m: 619-204-9061

4170 Morena Boulevard, Suite C - San Diego, CA 92117

High-Performance Computing / Lustre Filesystems / Scale-out Storage




