Re: [bug report] shared tags causes IO hang and performance drop

On 2021-04-16 10:59 a.m., John Garry wrote:
On 16/04/2021 09:39, Ming Lei wrote:
On Fri, Apr 16, 2021 at 09:29:37AM +0100, John Garry wrote:
On 16/04/2021 01:46, Ming Lei wrote:
I can't seem to recreate the same issue. Are you using a mainline defconfig,
or a special distro config?
The config is the rhel8 config.

Can you share that? Has anyone tested against mainline x86 config?
Sure, see the attachment.

Thanks. I assume that this is not seen on mainline x86 defconfig.

Unfortunately it's anything but easy for me to install an x86 kernel ATM.

And I am still seeing this on hisi_sas v2 hw with 5.12-rc7:

[  214.448368] rcu: INFO: rcu_preempt detected stalls on CPUs/tasks:
[  214.454468] rcu: Tasks blocked on level-1 rcu_node (CPUs 0-15):
[  214.460474]  (detected by 40, t=5255 jiffies, g=2229, q=1110)
[  214.466208] rcu: All QSes seen, last rcu_preempt kthread activity 1 (4294945760-4294945759), jiffies_till_next_fqs=1, root ->qsmask 0x1
[  214.478466] BUG: scheduling while atomic: irq/151-hisi_sa/503/0x00000004
[  214.485162] Modules linked in:
[  214.488208] CPU: 40 PID: 503 Comm: irq/151-hisi_sa Not tainted 5.11.0 #75
[  214.494985] Hardware name: Huawei Taishan 2280 /D05, BIOS Hisilicon D05 IT21 Nemo 2.0 RC0 04/18/2018
[  214.504105] Call trace:
[  214.506540]  dump_backtrace+0x0/0x1b0
[  214.510208]  show_stack+0x18/0x68
[  214.513513]  dump_stack+0xd8/0x134
[  214.516907]  __schedule_bug+0x60/0x78
[  214.520560]  __schedule+0x620/0x6d8
[  214.524039]  schedule+0x70/0x108
[  214.527256]  irq_thread+0xdc/0x230
[  214.530648]  kthread+0x154/0x158
[  214.533866]  ret_from_fork+0x10/0x30
john@ubuntu:~$

It seems to happen for rw=randread with mq-deadline only. v5.11 shows the same. I'm not sure whether this is a driver issue or something else.

Today I don't have access to other boards with enough disks to get a decent throughput to test against :(

I have always suspected that under extreme pressure the block layer (or
SCSI mid-level) does strange things, like an IO hang, but attempts to
prove that usually lead back to my own code :-). However, I have one
recent example where upwards of 10 commands had been submitted (via
blk_execute_rq_nowait()) and the following one stalled (all on the same
thread). Seconds later those 10 commands reported DID_TIME_OUT, the
stalled thread awoke, and my dd variant ran to its conclusion
(reporting 10 errors). Following copies showed no ill effects.
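
The submission sequence in question, roughly as it looks against the
~v5.12 block-layer API (a simplified, hypothetical sketch, not the
actual sg driver code; the SCSI CDB setup and most error handling are
omitted):

#include <linux/blkdev.h>
#include <linux/err.h>

static int submit_sketch(struct request_queue *q, struct gendisk *disk,
			 void __user *ubuf, unsigned long len,
			 rq_end_io_fn *done)
{
	struct request *rq;
	int res;

	rq = blk_get_request(q, REQ_OP_SCSI_IN, 0);	/* may wait for a tag */
	if (IS_ERR(rq))
		return PTR_ERR(rq);

	/* bio setup: map the user buffer into the request */
	res = blk_rq_map_user(q, rq, NULL, ubuf, len, GFP_KERNEL);
	if (res) {
		blk_put_request(rq);
		return res;
	}

	rq->timeout = 60 * HZ;			/* real timeout set here */
	blk_execute_rq_nowait(q, disk, rq, 0, done);	/* queue it and return */
	return 0;
}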

My weapons of choice are sg_dd, or rather sgh_dd and sg_mrq_dd. Those
last two monitor for stalls during the copy: each submitted READ and
WRITE command gets its pack_id from an incrementing atomic, and a
management thread in those copies checks every 300 milliseconds that
the atomic value is greater than it was at the previous check. If not,
they dump the state of the sg driver. The stalled request was in the
busy state with a timeout of 1 nanosecond, which indicated that
blk_execute_rq_nowait() had not been called. So the chief suspects
would be blk_get_request() followed by the bio setup calls, IMO.
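
A minimal user-space sketch of that stall-detection scheme (not the
actual sgh_dd/sg_mrq_dd code; the names and the simulated copy loop
are invented for illustration):

/*
 * Each submitted command takes its pack_id from an incrementing
 * atomic; a management thread checks every 300 ms that the counter
 * has moved on, and complains if it has not.
 */
#include <pthread.h>
#include <stdatomic.h>
#include <stdbool.h>
#include <stdio.h>
#include <unistd.h>

static atomic_int next_pack_id = 1;
static atomic_bool copy_done = false;

static int get_pack_id(void)		/* called per READ/WRITE submission */
{
	return atomic_fetch_add(&next_pack_id, 1);
}

static void *monitor_thread(void *arg)	/* 300 ms watchdog */
{
	int prev = atomic_load(&next_pack_id);

	(void)arg;
	while (!atomic_load(&copy_done)) {
		usleep(300 * 1000);
		int cur = atomic_load(&next_pack_id);
		if (cur <= prev)
			fprintf(stderr, "stall: pack_id stuck at %d\n",
				cur - 1);
			/* real tools dump the sg driver state here,
			 * e.g. /proc/scsi/sg/debug */
		prev = cur;
	}
	return NULL;
}

int main(void)
{
	pthread_t tid;

	pthread_create(&tid, NULL, monitor_thread, NULL);
	for (int i = 0; i < 10; i++) {	/* stand-in for the copy loop */
		(void)get_pack_id();
		usleep(100 * 1000);
	}
	atomic_store(&copy_done, true);
	pthread_join(tid, NULL);
	return 0;
}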

So it certainly looked like an IO hang, not a locking, resource, or
corruption issue IMO. That was with a branch off MKP's
5.13/scsi-staging branch taken a few weeks back, so basically
lk 5.12.0-rc1.

Doug Gilbert




