RE: [bug report] scsi host hang when running fio

> Hi guys,
>
> While investigating the performance issue reported by Ming [0], I am seeing
> this hang in certain scenarios:
>
> [ 740.499917] rcu: INFO: rcu_preempt detected stalls on CPUs/tasks:
> [ 740.505994] rcu: Tasks blocked on level-1 rcu_node (CPUs 0-15):
> [ 740.511982] (detected by 64, t=5255 jiffies, g=6105, q=6697)
> [ 740.517703] rcu: All QSes seen, last rcu_preempt kthread activity 0
> (4295075897-4295075897), jiffies_till_next_fqs=1, root ->qsmask 0x1
> [ 740.723625] BUG: scheduling while atomic: swapper/64/0/0x00000008
> [ 740.729692] Modules linked in:
> [ 740.732737] CPU: 64 PID: 0 Comm: swapper/64 Tainted: G W 5.12.0-rc7-g7589ed97c1da-dirty #322
> [ 740.742432] Hardware name: Huawei TaiShan 2280 V2/BC82AMDC, BIOS 2280-V2 CS V5.B133.01 03/25/2021
> [ 740.751264] Call trace:
> [ 740.753699] dump_backtrace+0x0/0x1b0
> [ 740.757353] show_stack+0x18/0x68
> [ 740.760654] dump_stack+0xd8/0x134
> [ 740.764046] __schedule_bug+0x60/0x78
> [ 740.767694] __schedule+0x620/0x6d8
> [ 740.771168] schedule_idle+0x20/0x40
> [ 740.774730] do_idle+0x19c/0x278
> [ 740.777945] cpu_startup_entry+0x24/0x68
> [ 740.781850] secondary_start_kernel+0x178/0x188
> [ 740.786362] 0x0
> ^Cbs: 12 (f=12): [r(12)] [0.0% done] [1626MB/0KB/0KB /s] [416K/0/0 iops] [eta 34722d:05h:16m:28s]
> fio: terminating on signal 2
>
> I thought it merited a separate thread.
>
> [ 740.723625] BUG: scheduling while atomic: swapper/64/0/0x00000008
> Looks bad ...
>
> The scenario to recreate it seems to be running fio with rw=randread and the
> mq-deadline IO scheduler while heavily loading the system - running fio on a
> subset of the available CPUs seems to help recreate it.
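
For anyone trying to recreate this, a minimal sketch of that kind of setup
(the device name, CPU list, and job sizing are assumptions, not the exact job
file used):

  # use mq-deadline on the test disk (sdX is a placeholder)
  echo mq-deadline > /sys/block/sdX/queue/scheduler

  # random reads, direct IO, pinned to a subset of CPUs to load them heavily
  taskset -c 0-15 fio --name=randread --filename=/dev/sdX --direct=1 \
      --rw=randread --ioengine=libaio --bs=4k --iodepth=32 \
      --numjobs=12 --time_based --runtime=300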
>
> When it occurs, the system becomes totally unresponsive.
>
> It could be an LLDD bug, but I am doubtful.
>
> Has anyone else seen this, or can anyone help try to recreate it?

John - I have not seen such an issue with the megaraid_sas driver. Is this
something to do with CPU lockup?
Can you try your test with "rq_affinity=2"? The megaraid_sas driver detects
CPU lockup (a flood of completions on a single CPU) and uses the irq_poll
interface to avoid such a loop.
Since you mentioned you noticed the issue with hisi_sas v2 without hostwide
tags, I can imagine something similar happening in this case.
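
rq_affinity is a per-device sysfs knob; setting it to 2 forces each completion
to run on the exact CPU that submitted the request (the device name below is a
placeholder):

  echo 2 > /sys/block/sdX/queue/rq_affinity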

How is the CPU-to-IRQ affinity settled in your case? Is it a 1:1 mapping?
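
One way to check, assuming the controller's completion-queue interrupts can be
picked out by name in /proc/interrupts (the grep pattern is a guess):

  # find the IRQ numbers used by the HBA
  grep -i hisi_sas /proc/interrupts

  # for each of those IRQ numbers <N>, see which CPUs it is pinned to
  cat /proc/irq/<N>/effective_affinity_list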

Kashyap

>
> scsi_debug or null_blk don't seem to load the system heavily enough to
> recreate it.
>
> I have seen it on 5.11 also. I see it on the hisi_sas v2 and v3 hw drivers,
> and I don't think it's related to hostwide tags, as for the hisi_sas v2 hw
> driver I unset that flag and can still see it.
>
> Thanks,
> John
>
> [0] https://lore.kernel.org/linux-scsi/89ebc37c-21d6-c57e-4267-cac49a3e5953@xxxxxxxxxx/T/#t
