On 8/5/21 6:50 PM, yukuai (C) wrote:
> After applying this configuration, the number of null_blk in my
> machine is about 650k(330k before). Is this still too low?

That seems low to me. If I run the attached script on a six-year-old
desktop with an eight-core i7-4790 CPU, it reports a little more than
5 million IOPS. Has kernel debugging perhaps been enabled in the kernel
on the test setup? Or is the system perhaps slowed down by security
mitigations?

> By the way, there are no performance degradation.

Please wait with drawing conclusions until you can run a workload of
several million IOPS on your setup.

Thanks,

Bart.
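The two questions above can be checked directly on the test machine. A
quick diagnostic sketch, assuming a kernel new enough to expose the
vulnerabilities directory in sysfs (v4.15 or later) and a kernel config
available via /proc/config.gz or /boot/config-$(uname -r) — adjust the
paths if your distribution differs:

```shell
#!/bin/bash
# Diagnostic sketch (assumptions noted in the text above).

# 1. Which security mitigations are active?
vulndir=/sys/devices/system/cpu/vulnerabilities
for f in "$vulndir"/*; do
    [ -r "$f" ] && printf '%s: %s\n' "${f##*/}" "$(cat "$f")"
done

# 2. Are expensive kernel debugging options enabled? These options can
# easily cost an order of magnitude in IOPS.
{
    if [ -r /proc/config.gz ]; then
        zcat /proc/config.gz
    elif [ -r "/boot/config-$(uname -r)" ]; then
        cat "/boot/config-$(uname -r)"
    fi
} | grep -E '^CONFIG_(PROVE_LOCKING|DEBUG_ATOMIC_SLEEP|KASAN|DEBUG_KOBJECT)=y' || true
```

Any mitigation line other than "Not affected"/"Vulnerable", or any hit
from the grep, is a candidate explanation for low null_blk numbers.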
#!/bin/bash

# Remove any null_blk instances left over from a previous run.
if [ -e /sys/kernel/config/nullb ]; then
    for d in /sys/kernel/config/nullb/*; do
        [ -d "$d" ] && rmdir "$d"
    done
fi
numcpus=$(grep -c ^processor /proc/cpuinfo)
modprobe -r null_blk
[ -e /sys/module/null_blk ] && exit $?
# Create a single null_blk instance via configfs: no completion delay,
# 512-byte blocks, no memory backing, multiqueue mode (queue_mode == 2).
modprobe null_blk nr_devices=0 &&
    udevadm settle &&
    cd /sys/kernel/config/nullb &&
    mkdir nullb0 &&
    cd nullb0 &&
    echo 0 > completion_nsec &&
    echo 512 > blocksize &&
    echo 0 > home_node &&
    echo 0 > irqmode &&
    echo 1024 > size &&
    echo 0 > memory_backed &&
    echo 2 > queue_mode &&
    echo 1 > power || exit $?
# Complete each request on the CPU that submitted it.
(cd /sys/block/nullb0/queue && echo 2 > rq_affinity) || exit $?

iodepth=${1:-1}
runtime=30
args=()
if [ "$iodepth" = 1 ]; then
    args+=(--ioengine=psync)
else
    args+=(--ioengine=io_uring --iodepth_batch=$((iodepth/2)))
fi
args+=(--iodepth=$iodepth --name=nullb0 --filename=/dev/nullb0 \
    --rw=read --bs=4096 --loops=$((1<<20)) --direct=1 --numjobs=$numcpus \
    --thread --runtime=$runtime --invalidate=1 --gtod_reduce=1 \
    --group_reporting=1 --ioscheduler=none)
# Bind fio to NUMA node 0 if numactl is available.
if numactl -m 0 -N 0 echo >&/dev/null; then
    numactl -m 0 -N 0 -- fio "${args[@]}"
else
    fio "${args[@]}"
fi
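For context, the engine selection in the script above: a queue depth of 1
uses the synchronous psync engine (asynchronous submission buys nothing
at depth 1), while deeper queues use io_uring with submissions batched
at half the queue depth. A standalone sketch of just that selection
logic, separated out here for illustration:

```shell
#!/bin/bash
# Sketch of the script's fio engine selection (not the full script):
# depth 1 -> synchronous psync; anything deeper -> io_uring with
# submission batching at half the queue depth.
iodepth=${1:-1}
if [ "$iodepth" = 1 ]; then
    engine_args="--ioengine=psync"
else
    engine_args="--ioengine=io_uring --iodepth_batch=$((iodepth/2))"
fi
# Prints "--ioengine=psync" when run without arguments.
echo "$engine_args"
```

So a run such as "script 64" benchmarks with io_uring at queue depth 64,
batching 32 submissions per system call.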