Re: [bug report] scsi host hang when running fio

On 19/04/2021 12:43, Kashyap Desai wrote:
Hi guys,

While investigating the performance issue reported by Ming [0], I am seeing this hang in certain scenarios:

[ 740.499917] rcu: INFO: rcu_preempt detected stalls on CPUs/tasks:
[ 740.505994] rcu: Tasks blocked on level-1 rcu_node (CPUs 0-15):
[ 740.511982] (detected by 64, t=5255 jiffies, g=6105, q=6697)
[ 740.517703] rcu: All QSes seen, last rcu_preempt kthread activity 0 (4295075897-4295075897), jiffies_till_next_fqs=1, root ->qsmask 0x1
[ 740.723625] BUG: scheduling while atomic: swapper/64/0/0x00000008
[ 740.729692] Modules linked in:
[ 740.732737] CPU: 64 PID: 0 Comm: swapper/64 Tainted: G W 5.12.0-rc7-g7589ed97c1da-dirty #322
[ 740.742432] Hardware name: Huawei TaiShan 2280 V2/BC82AMDC, BIOS 2280-V2 CS V5.B133.01 03/25/2021
[ 740.751264] Call trace:
[ 740.753699] dump_backtrace+0x0/0x1b0
[ 740.757353] show_stack+0x18/0x68
[ 740.760654] dump_stack+0xd8/0x134
[ 740.764046] __schedule_bug+0x60/0x78
[ 740.767694] __schedule+0x620/0x6d8
[ 740.771168] schedule_idle+0x20/0x40
[ 740.774730] do_idle+0x19c/0x278
[ 740.777945] cpu_startup_entry+0x24/0x68
[ 740.781850] secondary_start_kernel+0x178/0x188
[ 740.786362] 0x0
^Cbs: 12 (f=12): [r(12)] [0.0% done] [1626MB/0KB/0KB /s] [416K/0/0 iops] [eta 34722d:05h:16m:28s]
fio: terminating on signal 2

I thought it merited a separate thread.

[ 740.723625] BUG: scheduling while atomic: swapper/64/0/0x00000008
Looks bad ...

The scenario to recreate it seems to be running fio with rw=randread and the mq-deadline IO scheduler while heavily loading the system - restricting fio to a subset of the available CPUs seems to help recreate it.

When it occurs, the system becomes totally unresponsive.
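To give a concrete idea of the kind of load meant, something like the job below is the shape of it (an illustrative sketch only - the exact job file is not reproduced here, so the device name and sizing values are placeholders; numjobs=12 just matches the 12 read jobs visible in the fio status line above):

[global]
ioengine=libaio
direct=1
rw=randread
bs=4k
iodepth=64
numjobs=12
runtime=120
group_reporting

[hang-test]
filename=/dev/sdX

with mq-deadline selected beforehand, e.g. echo mq-deadline > /sys/block/sdX/queue/scheduler.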

It could be an LLDD bug, but I am doubtful.

Has anyone else seen this, or can anyone help try to recreate it?
John - I have not seen such an issue on the megaraid_sas driver. Is this something to do with a CPU lockup?

JFYI, this appears to be an issue with the combination of a threaded irq handler and managed interrupts. I raised the issue with Thomas:

https://lore.kernel.org/lkml/874kfxw9zv.ffs@xxxxxxxxxxxxxxxxxxxxxxx/

Maybe NVMe PCI could have the same issue.
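For reference, the combination in question looks roughly like this (a minimal sketch, not the actual LLDD code - the my_* names are made up for illustration): the driver spreads its completion-queue vectors with managed affinity and then completes IO from a threaded handler on those vectors.

#include <linux/interrupt.h>
#include <linux/pci.h>

static irqreturn_t my_cq_isr(int irq, void *data)
{
        /* Hard-irq half: just acknowledge and kick the irq thread. */
        return IRQ_WAKE_THREAD;
}

static irqreturn_t my_cq_thread_fn(int irq, void *data)
{
        /* Threaded half: walk the completion queue and complete commands. */
        return IRQ_HANDLED;
}

static int my_setup_cq_irqs(struct pci_dev *pdev, int nr_cq)
{
        struct irq_affinity desc = { .pre_vectors = 0 };
        int i, nvec, ret;

        /* Managed interrupts: the core spreads the vectors across the
         * online CPUs and shuts a vector down when all of its CPUs go
         * offline. */
        nvec = pci_alloc_irq_vectors_affinity(pdev, 1, nr_cq,
                                              PCI_IRQ_MSIX | PCI_IRQ_AFFINITY,
                                              &desc);
        if (nvec < 0)
                return nvec;

        for (i = 0; i < nvec; i++) {
                /* Threaded handler on a managed vector - the combination
                 * discussed above (error unwinding omitted for brevity). */
                ret = request_threaded_irq(pci_irq_vector(pdev, i),
                                           my_cq_isr, my_cq_thread_fn,
                                           IRQF_ONESHOT, "my_cq", pdev);
                if (ret)
                        return ret;
        }
        return 0;
}

The details of why this combination goes wrong are in the lore thread above.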

Thanks,
John




