Re: [bug report] shared tags causes IO hang and performance drop

On Wed, Apr 14, 2021 at 11:10:39AM +0100, John Garry wrote:
> Hi Ming,
> 
> > 
> > It is reported inside RH that CPU utilization increases by ~20% when
> > running a simple FIO test inside a VM whose disk is backed by an image
> > stored on XFS/megaraid_sas.
> > 
> > When I tried to investigate by reproducing the issue via scsi_debug, I found
> > an IO hang when running randread IO (8k, direct IO, libaio) on a scsi_debug
> > disk created by the following command:
> > 
> > 	modprobe scsi_debug host_max_queue=128 submit_queues=$NR_CPUS virtual_gb=256
> > 
> 
> So I can recreate this hang when using the mq-deadline IO scheduler for
> scsi_debug, in that fio does not exit. I'm using v5.12-rc7.
> 
> Do you have any idea what changed to cause this, as we would have tested
> this before? Or maybe we only tested the none IO scheduler on scsi_debug,
> and normally with a 4k block size and only rw=read (for me, anyway).

I just ran a quick test with none on scsi_debug, and it looks like the issue
can't be reproduced, but much worse performance is observed with none (a 20%
IOPS drop and a 50% increase in CPU utilization).
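For reference, the randread workload described above can be written as a fio
job file. This is only a sketch: the device path, iodepth, and runtime below
are placeholder assumptions, not values taken from the report.

```ini
; Sketch of the reported workload: 8k randread, direct IO, libaio.
; /dev/sdX, iodepth, and runtime are placeholders, not from the report.
[global]
ioengine=libaio
direct=1
rw=randread
bs=8k
iodepth=64
runtime=60
time_based

[scsi_debug-randread]
filename=/dev/sdX   ; replace with the actual scsi_debug disk
```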

> 
> Note that host_max_queue=128 will cap the submit queue depth at 128, while it
> would be 192 by default.

I chose 128 because the reported megaraid_sas host queue depth is 128.
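As a side note, the host queue depth a driver advertises can be checked via
the standard scsi_host sysfs attribute can_queue; this is a sketch (host
numbering varies per machine, and the loop prints nothing if no SCSI hosts
are present):

```shell
#!/bin/sh
# Print the advertised queue depth (can_queue) for each SCSI host.
# Host numbers vary per system; nothing is printed if no hosts exist.
for h in /sys/class/scsi_host/host*; do
    if [ -r "$h/can_queue" ]; then
        echo "$(basename "$h"): can_queue=$(cat "$h/can_queue")"
    fi
done
```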


Thanks, 
Ming



