RE: fio 3.2

OK, thanks Jens and Elliott.

I fixed that.

Now with /dev/dax0.0 and the mmap ioengine, slat is gone. But if fio maps the entire /dev/dax0.0 device once and then does all I/O with CPU loads/stores, why are the clat numbers so high: about 6us for reads and 12us for writes?
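To make sure I'm reading the engine right: as I understand it, the mmap ioengine maps the file once per job and then services each I/O with a plain memcpy, so clat covers the whole block copy. A rough standalone sketch of the pattern I have in mind (my own illustration, not fio's actual engines/mmap.c):

/* Sketch of one mmap-engine "read": map once, then memcpy per I/O.
 * My own illustration of the pattern, not fio's actual code. */
#include <fcntl.h>
#include <stdio.h>
#include <string.h>
#include <sys/mman.h>
#include <time.h>
#include <unistd.h>

#define MAP_LEN  (290ULL << 30)   /* --size=290g, assuming GiB */
#define XFER_LEN (64 * 1024)      /* dominant 64k block in my bssplit */

int main(void)
{
    int fd = open("/dev/dax0.0", O_RDWR);
    if (fd < 0) { perror("open"); return 1; }

    /* Map the whole device up front, once per job. */
    char *base = mmap(NULL, MAP_LEN, PROT_READ | PROT_WRITE,
                      MAP_SHARED, fd, 0);
    if (base == MAP_FAILED) { perror("mmap"); return 1; }

    static char buf[XFER_LEN];
    struct timespec t0, t1;

    /* One "I/O" is just a block-sized copy; clat would time this. */
    clock_gettime(CLOCK_MONOTONIC, &t0);
    memcpy(buf, base, XFER_LEN);
    clock_gettime(CLOCK_MONOTONIC, &t1);

    printf("64KiB copy: %ld ns\n",
           (t1.tv_sec - t0.tv_sec) * 1000000000L +
           (t1.tv_nsec - t0.tv_nsec));

    munmap(base, MAP_LEN);
    close(fd);
    return 0;
}

Writing it out like this, I wonder if I'm answering my own question: with 64KiB blocks dominating the bssplit, 12us per write works out to about 64KiB / 12us, roughly 5.5 GB/s of copy bandwidth per job, so maybe clat here is just the time to memcpy a whole block rather than a single load/store?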

dl560g10spmem01:/var/work/fio # taskset -c 0-19 /usr/local/bin/fio --filename=/dev/dax0.0 --rw=randrw --refill_buffers --norandommap --randrepeat=0 --ioengine=mmap --bssplit=4k/4:8k/7:16k/7:32k/15:64k/65:128k/1:256k/1 --rwmixread=5 --iodepth=1 --numjobs=16 --runtime=1800 --group_reporting --name=4-rand-rw-3xx --size=290g
4-rand-rw-3xx: (g=0): rw=randrw, bs=(R) 4096B-256KiB, (W) 4096B-256KiB, (T) 4096B-256KiB, ioengine=mmap, iodepth=1
...
fio-3.2-53-ga7d0-dirty
Starting 16 processes
Jobs: 1 (f=1): [_(5),m(1),_(10)][100.0%][r=1142MiB/s,w=21.2GiB/s][r=22.4k,w=428k IOPS][eta 00m:00s]
4-rand-rw-3xx: (groupid=0, jobs=16): err= 0: pid=67241: Thu Nov 30 23:04:51 2017
   read: IOPS=37.0k, BW=1931MiB/s (2025MB/s)(232GiB/123040msec)
    clat (nsec): min=197, max=753983, avg=6709.21, stdev=4157.00
     lat (nsec): min=218, max=754005, avg=6740.89, stdev=4157.15
    clat percentiles (nsec):
     |  1.00th=[  572],  5.00th=[  996], 10.00th=[ 1528], 20.00th=[ 3568],
     | 30.00th=[ 4832], 40.00th=[ 6880], 50.00th=[ 7328], 60.00th=[ 7712],
     | 70.00th=[ 8096], 80.00th=[ 8512], 90.00th=[ 9536], 95.00th=[10688],
     | 99.00th=[25984], 99.50th=[31360], 99.90th=[40192], 99.95th=[45824],
     | 99.99th=[54016]
   bw (  KiB/s): min=110400, max=148800, per=6.28%, avg=124171.80, stdev=4366.58, samples=3910
   iops        : min= 2114, max= 2842, avg=2384.24, stdev=74.15, samples=3910
  write: IOPS=721k, BW=35.8GiB/s (38.5GB/s)(4408GiB/123040msec)
    clat (nsec): min=72, max=814061, avg=12174.57, stdev=7463.58
     lat (nsec): min=91, max=814132, avg=12221.87, stdev=7475.72
    clat percentiles (nsec):
     |  1.00th=[  684],  5.00th=[ 1400], 10.00th=[ 2064], 20.00th=[ 6304],
     | 30.00th=[ 8384], 40.00th=[12992], 50.00th=[13760], 60.00th=[14400],
     | 70.00th=[15040], 80.00th=[15936], 90.00th=[17024], 95.00th=[18048],
     | 99.00th=[46336], 99.50th=[59648], 99.90th=[63232], 99.95th=[64256],
     | 99.99th=[71168]
   bw (  MiB/s): min= 2197, max= 2647, per=6.28%, avg=2303.69, stdev=20.83, samples=3910
   iops        : min=43091, max=52348, avg=45290.89, stdev=447.39, samples=3910
  lat (nsec)   : 100=0.01%, 250=0.01%, 500=0.01%, 750=1.99%, 1000=1.53%
  lat (usec)   : 2=6.41%, 4=6.96%, 10=18.78%, 20=61.82%, 50=1.56%
  lat (usec)   : 100=0.95%, 250=0.01%, 500=0.01%, 750=0.01%, 1000=0.01%
  cpu          : usr=99.80%, sys=0.18%, ctx=4735, majf=0, minf=2519628
  IO depths    : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0%
     submit    : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
     complete  : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
     issued rwt: total=4671537,88741744,0, short=0,0,0, dropped=0,0,0
     latency   : target=0, window=0, percentile=100.00%, depth=1

Run status group 0 (all jobs):
   READ: bw=1931MiB/s (2025MB/s), 1931MiB/s-1931MiB/s (2025MB/s-2025MB/s), io=232GiB (249GB), run=123040-123040msec
  WRITE: bw=35.8GiB/s (38.5GB/s), 35.8GiB/s-35.8GiB/s (38.5GB/s-38.5GB/s), io=4408GiB (4733GB), run=123040-123040msec

Fio NUMA placement doesn't work: when I remove taskset and add numa_cpu_nodes instead, all 16 jobs die with signal 7 (SIGBUS):

dl560g10spmem01:/var/work/fio #
dl560g10spmem01:/var/work/fio # /usr/local/bin/fio --filename=/dev/dax0.0 --rw=randrw --refill_buffers --norandommap --randrepeat=0 --ioengine=mmap --bssplit=4k/4:8k/7:16k/7:32k/15:64k/65:128k/1:256k/1 --rwmixread=5 --iodepth=1 --numjobs=16 --runtime=1800 --group_reporting --name=4-rand-rw-3xx --size=300g --numa_cpu_nodes=0
4-rand-rw-3xx: (g=0): rw=randrw, bs=(R) 4096B-256KiB, (W) 4096B-256KiB, (T) 4096B-256KiB, ioengine=mmap, iodepth=1
...
fio-3.2-53-ga7d0-dirty
Starting 16 processes
fio: pid=67474, got signal=7
fio: pid=67481, got signal=7
fio: pid=67479, got signal=7
fio: pid=67482, got signal=7
fio: pid=67483, got signal=7
fio: pid=67480, got signal=7
fio: pid=67486, got signal=7
fio: pid=67485, got signal=7
fio: pid=67475, got signal=7
fio: pid=67478, got signal=7
fio: pid=67472, got signal=7
fio: pid=67484, got signal=7
fio: pid=67476, got signal=7
fio: pid=67487, got signal=7
fio: pid=67473, got signal=7
fio: pid=67477, got signal=7

4-rand-rw-3xx: (groupid=0, jobs=16): err= 0: pid=67472: Thu Nov 30 23:13:07 2017
  lat (usec)   : 4=12.80%, 10=22.30%, 20=62.03%, 50=2.87%
  cpu          : usr=0.00%, sys=0.00%, ctx=0, majf=0, minf=0
  IO depths    : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0%
     submit    : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
     complete  : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
     issued rwt: total=24,429,0, short=0,0,0, dropped=0,0,0
     latency   : target=0, window=0, percentile=100.00%, depth=1

Run status group 0 (all jobs):
fio: file hash not empty on exit
dl560g10spmem01:/var/work/fio #
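In case it helps narrow it down: signal 7 is SIGBUS here (x86_64). My understanding from the option docs is that numa_cpu_nodes goes through libnuma, roughly like this sketch of the binding (my own illustration, not fio's source; link with -lnuma):

/* Sketch of the kind of CPU binding --numa_cpu_nodes=0 requests via
 * libnuma. My own illustration, not fio's actual code. */
#include <numa.h>
#include <stdio.h>

int main(void)
{
    if (numa_available() < 0) {
        fprintf(stderr, "libnuma not available\n");
        return 1;
    }

    /* Parse the node list "0" and restrict this process to the CPUs
     * of that node, much as taskset -c 0-19 did by hand. */
    struct bitmask *nodes = numa_parse_nodestring("0");
    if (!nodes) {
        fprintf(stderr, "bad node string\n");
        return 1;
    }
    if (numa_run_on_node_mask(nodes) < 0) {
        perror("numa_run_on_node_mask");
        return 1;
    }
    numa_free_nodemask(nodes);

    printf("now running on NUMA node 0 CPUs only\n");
    return 0;
}

The taskset run above worked (though with --size=290g rather than 300g), so perhaps the SIGBUS comes from fio's NUMA setup path, or possibly from the size difference.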



-----Original Message-----
From: Jens Axboe [mailto:axboe@xxxxxxxxx] 
Sent: Thursday, November 30, 2017 11:24 PM
To: Gavriliuk, Anton (HPS Ukraine) <anton.gavriliuk@xxxxxxx>; Robert Elliott (Persistent Memory) <elliott@xxxxxxx>; Rebecca Cran <rebecca@xxxxxxxxxxxx>; Sitsofe Wheeler <sitsofe@xxxxxxxxx>
Cc: fio@xxxxxxxxxxxxxxx; Kani, Toshimitsu <toshi.kani@xxxxxxx>
Subject: Re: fio 3.2

On 11/30/2017 07:17 AM, Gavriliuk, Anton (HPS Ukraine) wrote:
> Is there any chance to fix it?
> 
> dl560g10spmem01:/var/work # /usr/local/bin/fio --filename=/dev/dax0.0 
> --rw=randrw --refill_buffers --randrepeat=0 --ioengine=mmap 
> --bssplit=4k/4:8k/7:16k/7:32k/15:64k/65:128k/1:256k/1 --rwmixread=5 
> --iodepth=1 --numjobs=16 --runtime=1800 --group_reporting 
> --name=4-rand-rw-3xx --size=290g
> 4-rand-rw-3xx: (g=0): rw=randrw, bs=(R) 4096B-256KiB, (W) 
> 4096B-256KiB, (T) 4096B-256KiB, ioengine=mmap, iodepth=1 ...
> fio-2.99
> Starting 16 processes
> 4-rand-rw-3xx: failed to get file size of /dev/dax0.0
> 
> 4-rand-rw-3xx: failed to get file size of /dev/dax0.0
> 
> 4-rand-rw-3xx: failed to get file size of /dev/dax0.0
> 
> 4-rand-rw-3xx: failed to get file size of /dev/dax0.0
> 
> 4-rand-rw-3xx: failed to get file size of /dev/dax0.0
> 
> 4-rand-rw-3xx: failed to get file size of /dev/dax0.0
> 
> 4-rand-rw-3xx: failed to get file size of /dev/dax0.0

Should already be fixed as of yesterday morning.

--
Jens Axboe
