Re: [PATCH] blk-mq: allow hardware queue to get more tag while sharing a tag set

On 8/2/21 6:34 AM, yukuai (C) wrote:
> I ran a test on both null_blk and nvme; the results show no
> performance degradation:
>
> test platform: x86
> test cpu: 2 nodes, 72 CPUs in total
> test scheduler: none
> test device: null_blk / nvme
>
> test cmd: fio -filename=/dev/xxx -name=test -ioengine=libaio -direct=1 \
>   -numjobs=72 -iodepth=16 -bs=4k -rw=write -offset_increment=1G \
>   -cpus_allowed=0:71 -cpus_allowed_policy=split -group_reporting \
>   -runtime=120
>
> test results: iops
> 1) null_blk before this patch: 280k
> 2) null_blk after this patch: 282k
> 3) nvme before this patch: 378k
> 4) nvme after this patch: 384k

Please use io_uring for performance tests.
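Untested, but something along these lines should work. The job parameters below are simply carried over from the command quoted above with only the I/O engine swapped, and this assumes a fio build with io_uring support:

  fio -filename=/dev/xxx -name=test -ioengine=io_uring -direct=1 \
      -numjobs=72 -iodepth=16 -bs=4k -rw=write -offset_increment=1G \
      -cpus_allowed=0:71 -cpus_allowed_policy=split -group_reporting \
      -runtime=120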

The null_blk numbers seem way too low to me. If I run a null_blk performance test inside a VM with 6 CPU cores (Xeon W-2135 CPU), I see about 6 million IOPS for synchronous I/O and about 4.4 million IOPS with libaio. The options I used that are not in the above command line are: --thread --gtod_reduce=1 --ioscheduler=none.
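Spelled out, a null_blk run with those extra options might look like the following. The device name and the value of --numjobs are illustrative guesses based on my setup (6 cores) and the command quoted above, not an exact reproduction of my test:

  fio --filename=/dev/nullb0 --name=test --ioengine=libaio --direct=1 \
      --thread --gtod_reduce=1 --ioscheduler=none \
      --numjobs=6 --iodepth=16 --bs=4k --rw=write --group_reporting \
      --runtime=120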

Thanks,

Bart.


