RE: fio 3.2

> Fio NUMA placement doesn't work when I remove taskset and add
> numa_cpu_nodes
>

> dl560g10spmem01:/var/work/fio # taskset -c 0-19 /usr/local/bin/fio --filename=/dev/dax0.0 --rw=randrw --refill_buffers --norandommap --randrepeat=0 --ioengine=mmap --bssplit=4k/4:8k/7:16k/7:32k/15:64k/65:128k/1:256k/1 --rwmixread=5 --iodepth=1 --numjobs=16 --runtime=1800 --group_reporting --name=4-rand-rw-3xx --size=290g

> dl560g10spmem01:/var/work/fio # /usr/local/bin/fio --filename=/dev/dax0.0 --rw=randrw --refill_buffers --norandommap --randrepeat=0 --ioengine=mmap --bssplit=4k/4:8k/7:16k/7:32k/15:64k/65:128k/1:256k/1 --rwmixread=5 --iodepth=1 --numjobs=16 --runtime=1800 --group_reporting --name=4-rand-rw-3xx --size=300g --numa_cpu_nodes=0
> Starting 16 processes
> fio: pid=67474, got signal=7

signal=7 is SIGBUS, which occurs if the size is too big: you increased
the size by 10 GiB and probably hit the limit once the ndctl metadata
overhead is accounted for.  The device is probably 300 GiB raw, so
--size=300g no longer fits in the usable capacity.
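For the curious, here's why that shows up as a signal rather than a
clean error: with ioengine=mmap, fio touches the device through a
memory mapping, and faulting in a page beyond the end of the backing
object raises SIGBUS.  A standalone sketch (illustration only, not fio
code) that reproduces the same signal:

#include <stdio.h>
#include <stdlib.h>
#include <sys/mman.h>
#include <unistd.h>

int main(void)
{
        char path[] = "/tmp/sigbus-demo-XXXXXX";
        int fd = mkstemp(path);               /* one-page backing file */
        long page = sysconf(_SC_PAGESIZE);

        if (fd < 0 || ftruncate(fd, page) != 0)
                return 1;
        unlink(path);

        /* Map two pages; the mmap call itself succeeds even though
         * the backing file is only one page long. */
        char *p = mmap(NULL, 2 * page, PROT_READ | PROT_WRITE,
                       MAP_SHARED, fd, 0);
        if (p == MAP_FAILED)
                return 1;

        p[0] = 1;        /* inside the file: fine */
        p[page] = 1;     /* past the end of the file: SIGBUS (signal 7) */

        puts("never reached");
        return 0;
}

ndctl list should show the actual usable namespace size, which is what
--size has to fit within.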

While we're discussing NUMA, I'll mention something else I saw on
Windows while fixing the thread affinities there.

At startup, fio spawns a thread on every CPU to measure the clocks
(fio_monotonic_clocktest).  If you've constrained the CPU affinity
outside fio, the threads pinned to the excluded CPUs will fail.  On
Windows, something like
START /AFFINITY 0x55555555 fio ...
causes half of the clock threads to fail.
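For reference, here's roughly the pattern that trips it, sketched with
the Win32 API (illustration only, not fio's actual clocktest code).
SetThreadAffinityMask requires the thread mask to be a subset of the
process affinity mask, so under a sparse START /AFFINITY mask the
pinning calls for the excluded CPUs fail:

#include <windows.h>
#include <stdio.h>

static DWORD WINAPI clock_thread(LPVOID arg)
{
        (void)arg;
        /* stand-in for the per-CPU clock calibration work */
        return 0;
}

int main(void)
{
        SYSTEM_INFO si;
        GetSystemInfo(&si);

        for (DWORD cpu = 0; cpu < si.dwNumberOfProcessors &&
                            cpu < 8 * sizeof(DWORD_PTR); cpu++) {
                HANDLE h = CreateThread(NULL, 0, clock_thread, NULL,
                                        CREATE_SUSPENDED, NULL);
                if (h == NULL)
                        continue;
                /* Pin the thread to one CPU.  If that CPU is outside
                 * the process affinity mask (e.g. launched via
                 * START /AFFINITY 0x55555555), this call fails with
                 * ERROR_INVALID_PARAMETER. */
                if (SetThreadAffinityMask(h, (DWORD_PTR)1 << cpu) == 0)
                        fprintf(stderr,
                                "cpu %lu: SetThreadAffinityMask failed (%lu)\n",
                                (unsigned long)cpu,
                                (unsigned long)GetLastError());
                ResumeThread(h);
                WaitForSingleObject(h, INFINITE);
                CloseHandle(h);
        }
        return 0;
}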

My Windows machine stopped working around that time, so I haven't had a
chance to try a fix for that yet.

