Re: Random high CPU utilization in blk-mq with the none scheduler

Hi Dexuan,

On Sat, Dec 11, 2021 at 03:10:43AM +0000, Dexuan Cui wrote:
> > From: Jens Axboe <axboe@xxxxxxxxx>
> > Sent: Friday, December 10, 2021 6:05 PM
> > ...
> > It's more likely the real fix is avoiding the repeated plug list scan,
> > which I guess makes sense. That is this commit:
> > 
> > commit d38a9c04c0d5637a828269dccb9703d42d40d42b
> > Author: Jens Axboe <axboe@xxxxxxxxx>
> > Date:   Thu Oct 14 07:24:07 2021 -0600
> > 
> >     block: only check previous entry for plug merge attempt
> > 
> > If that's the case, try 5.15.x again and do:
> > 
> > echo 2 > /sys/block/<dev>/queue/nomerges
> > 
> > for each drive you are using in the IO test, and see if that gets
> > rid of the excess CPU usage.
> > 
> > --
> > Jens Axboe
> 
> Thanks for the reply! Unfortunately, this does not work.
> 
> I tried the below command:
> 
> for i in `ls /sys/block/*/queue/nomerges`; do echo 2 > $i; done
> 
> and verified that "nomerges" was changed to "2" on each device,
> but the excess CPU usage can still be reproduced easily.

Can you provide the following blk-mq debugfs logs?

(cd /sys/kernel/debug/block/dm-N && find . -type f -exec grep -aH . {} \;)

(cd /sys/kernel/debug/block/sdN && find . -type f -exec grep -aH . {} \;)

It is enough to collect logs from just one dm-mpath device and one underlying
iSCSI disk, so we can understand the basic blk-mq settings, such as
nr_hw_queues, queue depths, ...
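For convenience, the two commands above can be wrapped in a small helper
(the function name and the log-file redirection are just illustrative; the
actual device names depend on your setup):

```shell
# Sketch: dump every blk-mq debugfs attribute under a queue directory,
# equivalent to the find|grep pipeline above.
dump_blk_mq_debugfs() {
    # Print each file as "path:content" so hctx state, tags, flags, etc.
    # stay labeled with the file they came from.
    (cd "$1" && find . -type f -exec grep -aH . {} \;)
}

# Usage (needs root, and debugfs mounted at /sys/kernel/debug):
#   dump_blk_mq_debugfs /sys/kernel/debug/block/dm-N > dm-N.log
#   dump_blk_mq_debugfs /sys/kernel/debug/block/sdN  > sdN.log
```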



Thanks,
Ming



