Re: [bug report] shared tags causes IO hang and performance drop

On 15/04/2021 04:46, Ming Lei wrote:
Today I re-ran the scsi_debug test on two servers (32 cores, dual
NUMA nodes), and the CPU utilization issue can be reproduced. Test
results:

		CPU util			IOPS
mq-deadline	usr=21.72%, sys=44.18%		423K
none		usr=23.15%, sys=74.01%		450K


I haven't forgotten about this.

I finally got your .config working in x86 qemu with only a 4-CPU system.
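In case anyone else wants to reproduce in a VM, this is roughly the shape of the invocation I mean (a sketch only: the kernel and rootfs paths are placeholders, not from the original setup):

```shell
# Boot the test .config in a 4-CPU x86 guest (paths are hypothetical;
# point -kernel and -drive at your own build and image):
qemu-system-x86_64 -smp 4 -m 2G \
	-kernel arch/x86/boot/bzImage \
	-append "root=/dev/vda console=ttyS0" \
	-drive file=rootfs.img,if=virtio \
	-nographic
```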

1) randread test on ibm-x3850x6[*] with deadline

               |IOPS    | FIO CPU util
------------------------------------------------
hosttags      | 94k    | usr=1.13%, sys=14.75%
------------------------------------------------
non hosttags  | 124k   | usr=1.12%, sys=10.65%
------------------------------------------------


Getting these results for mq-deadline:

hosttags:     100K IOPS, usr=1.52%, sys=4.47%

non-hosttags: 109K IOPS, usr=1.74%, sys=5.49%

So I still don't see the same CPU usage increase for hosttags.

But throughput is down, so at least I can check on that...
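For what it's worth, normalizing IOPS by sys-CPU makes the efficiency gap in your deadline table clearer; a rough awk sketch using the table-1 figures (94k @ sys=14.75% vs 124k @ sys=10.65%):

```shell
# Rough IOPS-per-%sys comparison for table 1 (deadline case);
# numbers are taken directly from the quoted table above.
awk 'BEGIN {
	printf "hosttags:     %.1f kIOPS per %%sys\n", 94/14.75
	printf "non-hosttags: %.1f kIOPS per %%sys\n", 124/10.65
}'
```

By that measure the hosttags path costs nearly twice the sys-CPU per I/O, not just the raw IOPS difference.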


2) randread test on ibm-x3850x6[*] with none

               |IOPS    | FIO CPU util
------------------------------------------------
hosttags      | 120k   | usr=0.89%, sys=6.55%
------------------------------------------------
non hosttags  | 121k   | usr=1.07%, sys=7.35%
------------------------------------------------


Here I get:
hosttags:     113K IOPS, usr=2.04%, sys=5.83%

non-hosttags: 108K IOPS, usr=1.71%, sys=5.05%

Thanks,
John



  *:
  	- that is the machine on which Yanhui reported VM CPU utilization
  	  increased by 20%
	- kernel: latest Linus tree (v5.12-rc7, commit: 7f75285ca57)
	- the same test was also run on another 32-core machine; the IOPS
	  drop isn't observed there, but CPU utilization clearly increases

3) test script
#!/bin/bash

run_fio() {
	RTIME=$1
	JOBS=$2
	DEVS=$3
	BS=$4

	QD=64
	BATCH=16

	fio --bs=$BS --ioengine=libaio \
		--iodepth=$QD \
		--iodepth_batch_submit=$BATCH \
		--iodepth_batch_complete_min=$BATCH \
		--filename=$DEVS \
		--direct=1 --runtime=$RTIME --numjobs=$JOBS --rw=randread \
		--name=test --group_reporting
}

SCHED=$1

NRQS=$(getconf _NPROCESSORS_ONLN)

rmmod scsi_debug
modprobe scsi_debug host_max_queue=128 submit_queues=$NRQS virtual_gb=256
sleep 2
DEV=$(lsscsi | grep scsi_debug | awk '{print $6}')
echo $SCHED >/sys/block/$(basename $DEV)/queue/scheduler
echo 128 >/sys/block/$(basename $DEV)/device/queue_depth
run_fio 20 16 $DEV 8k


rmmod scsi_debug
modprobe scsi_debug max_queue=128 submit_queues=1 virtual_gb=256
sleep 2
DEV=$(lsscsi | grep scsi_debug | awk '{print $6}')
echo $SCHED >/sys/block/$(basename $DEV)/queue/scheduler
echo 128 >/sys/block/$(basename $DEV)/device/queue_depth
run_fio 20 16 $DEV 8k
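
As a side note, when scripting the comparison it's handy to pull the IOPS figure out of a saved fio log; a small sketch (parse_iops is a hypothetical helper, and it assumes fio's default "read: IOPS=..." summary line in the normal output format):

```shell
#!/bin/bash
# Sketch: extract the IOPS value from a saved fio log.
# Assumes fio's default "  read: IOPS=..." summary line;
# parse_iops is a hypothetical helper, not part of the test script above.
parse_iops() {
	grep -oE 'IOPS=[0-9.]+[kM]?' "$1" | head -1 | cut -d= -f2
}

# Example against a captured summary line (values illustrative):
echo '  read: IOPS=94.0k, BW=736MiB/s (771MB/s)(14.4GiB/20002msec)' >/tmp/fio-sample.log
parse_iops /tmp/fio-sample.log	# prints 94.0k
```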



