Re: [bug report] shared tags causes IO hang and performance drop

On Tue, Apr 20, 2021 at 12:54 PM Douglas Gilbert <dgilbert@xxxxxxxxxxxx> wrote:
>
> On 2021-04-19 11:22 p.m., Bart Van Assche wrote:
> > On 4/19/21 8:06 PM, Douglas Gilbert wrote:
> >> I have always suspected that under extreme pressure the block layer (or
> >> scsi mid-level) does strange things, like an IO hang; attempts to prove
> >> that usually lead back to my own code :-). But I have one recent example
> >> where upwards of 10 commands had been submitted (blk_execute_rq_nowait())
> >> and the following one stalled (all on the same thread). Seconds later
> >> those 10 commands reported DID_TIME_OUT, the stalled thread awoke, and
> >> my dd variant ran to its conclusion (reporting 10 errors). Subsequent
> >> copies showed no ill effects.
> >>
> >> My weapons of choice are sg_dd, or more precisely sgh_dd and sg_mrq_dd.
> >> Those last two monitor for stalls during the copy. Each submitted READ
> >> and WRITE command gets its pack_id from an incrementing atomic, and a
> >> management thread in those copies checks every 300 milliseconds that the
> >> atomic value has advanced since the previous check; if it has not, the
> >> state of the sg driver is dumped (a rough sketch of this check appears
> >> below, after the quoted text). The stalled request was in the busy state
> >> with a timeout of 1 nanosecond, which indicated that
> >> blk_execute_rq_nowait() had not been called. So the chief suspect would
> >> be blk_get_request() followed by the bio setup calls IMO.
> >>
> >> So it certainly looked like an IO hang, not a locking, resource or
> >> corruption issue IMO. That was with a branch off MKP's 5.13/scsi-staging
> >> branch taken a few weeks back, so basically lk 5.12.0-rc1.
> >
> > Hi Doug,
> >
> > If it is possible to develop a script that reproduces this hang, and if
> > that script can be shared, I will help with root-causing and fixing it.
>
> Possible, but not very practical:
>     1) apply supplied 83 patches to sg driver
>     2) apply pending patch to scsi_debug driver
>     3) find a stable kernel platform (perhaps not lk 5.12.0-rc1)
>     4) run supplied scripts for three weeks
>     5) dig through the output and maybe find one case (there were lots
>        of EAGAINs from blk_get_request() but they are expected when
>        thrashing the storage layers)
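
For illustration, the 300 ms stall check Doug describes above amounts to
something like the following userspace sketch. It is only a sketch of the
idea, not the actual sgh_dd / sg_mrq_dd code; the names pack_id_counter,
copy_running and dump_sg_state are invented here.

/*
 * Illustrative sketch only (not the actual sgh_dd / sg_mrq_dd code).
 * Each submitted READ/WRITE command takes its pack_id from an
 * incrementing atomic; a monitor thread wakes every 300 milliseconds
 * and, if the counter has not advanced since the previous check,
 * assumes the copy has stalled and dumps the sg driver state.
 */
#include <pthread.h>
#include <stdatomic.h>
#include <stdbool.h>
#include <stdio.h>
#include <time.h>

static atomic_int pack_id_counter = 1;    /* next pack_id to hand out */
static atomic_bool copy_running = true;   /* cleared when the copy completes */

/* Called once per submitted READ or WRITE command. */
static int next_pack_id(void)
{
        return atomic_fetch_add(&pack_id_counter, 1);
}

/* Placeholder: the real tools dump sg driver state (e.g. /proc/scsi/sg/debug). */
static void dump_sg_state(void)
{
        fprintf(stderr, "stall detected: no new pack_id in last 300 ms\n");
}

/* Management thread: check every 300 ms that pack_id_counter has advanced. */
static void *stall_monitor(void *arg)
{
        const struct timespec period = { .tv_sec = 0, .tv_nsec = 300000000L };
        int prev = atomic_load(&pack_id_counter);

        (void)arg;
        while (atomic_load(&copy_running)) {
                nanosleep(&period, NULL);
                int cur = atomic_load(&pack_id_counter);

                if (cur <= prev)
                        dump_sg_state();
                prev = cur;
        }
        return NULL;
}

int main(void)
{
        pthread_t tid;

        if (pthread_create(&tid, NULL, stall_monitor, NULL))
                return 1;

        /* Stand-in for the copy loop: issue a few "commands". */
        for (int i = 0; i < 5; i++) {
                struct timespec t = { .tv_sec = 0, .tv_nsec = 100000000L };

                (void)next_pack_id();
                nanosleep(&t, NULL);
        }

        atomic_store(&copy_running, false);
        pthread_join(tid, NULL);
        return 0;
}

In the real tools the dump goes to the sg driver's debug output rather than
a simple message; the point is only that a pack_id counter which stops
advancing is what flagged the hang described above.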

Or collect the debugfs log after the IO hang is triggered in your test:

(cd /sys/kernel/debug/block/$SDEV && find . -type f -exec grep -aH . {} \;)

$SDEV is the disk on which the IO hang is observed.

Thanks,
Ming



