Re: [bug report] shared tags causes IO hang and performance drop

On 2021-04-20 2:52 a.m., Ming Lei wrote:
On Tue, Apr 20, 2021 at 12:54 PM Douglas Gilbert <dgilbert@xxxxxxxxxxxx> wrote:

On 2021-04-19 11:22 p.m., Bart Van Assche wrote:
On 4/19/21 8:06 PM, Douglas Gilbert wrote:
I have always suspected that under extreme pressure the block layer (or scsi
mid-level) does strange things, like an IO hang, but attempts to prove that
usually lead back to my own code :-). However, I have one recent example
where upwards of 10 commands had been submitted (blk_execute_rq_nowait())
and the following one stalled (all on the same thread). Seconds later
those 10 commands reported DID_TIME_OUT, the stalled thread awoke, and
my dd variant ran to its conclusion (reporting 10 errors). Subsequent
copies showed no ill effects.

My weapons of choice are sg_dd, actually sgh_dd and sg_mrq_dd. Those last
two monitor for stalls during the copy. Each submitted READ and WRITE
command gets its pack_id from an incrementing atomic, and a management
thread in those copies checks every 300 milliseconds that the atomic's
value is greater than it was at the previous check. If not, it dumps the
state of the sg driver. The stalled request was in busy state with a
timeout of 1 nanosecond, which indicated that blk_execute_rq_nowait()
had not been called. So the chief suspect would be blk_get_request()
followed by the bio setup calls, IMO.
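
That watchdog boils down to something like the following minimal sketch,
here driven from outside the copy program (the real check is an
in-process atomic pack_id; /tmp/out.bin stands in for whatever the copy
is writing):

    prev=-1
    while sleep 0.3; do                 # ~300 millisecond poll
        cur=$(stat -c %s /tmp/out.bin 2>/dev/null || echo 0)
        if [ "$cur" -eq "$prev" ]; then
            # no forward progress since the last poll: dump driver state
            cat /proc/scsi/sg/debug
        fi
        prev=$cur
    done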

So it certainly looked like an IO hang, not a locking, resource, or
corruption issue, IMO. That was with a branch off MKP's
5.13/scsi-staging branch taken a few weeks back, so basically
lk 5.12.0-rc1.

Hi Doug,

If it would be possible to develop a script that reproduces this hang,
and if that script can be shared, I will help with root-causing and
fixing it.

Possible, but not very practical:
     1) apply supplied 83 patches to sg driver
     2) apply pending patch to scsi_debug driver
     3) find a stable kernel platform (perhaps not lk 5.12.0-rc1)
     4) run supplied scripts for three weeks
     5) dig through the output and maybe find one case (there were lots
        of EAGAINs from blk_get_request() but they are expected when
        thrashing the storage layers)

Or collect the debugfs log after the IO hang is triggered in your test:

(cd /sys/kernel/debug/block/$SDEV && find . -type f -exec grep -aH . {} \;)

$SDEV is the disk on which the IO hang is observed.

Thanks. I'll try adding that to my IO hang trigger code.
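
For tests that span several SCSI disks, a loop along these lines should
collect all of them at once (the sd* glob is illustrative; adjust it to
the devices under test):

    for d in /sys/kernel/debug/block/sd*; do
        [ -d "$d" ] || continue
        (cd "$d" && find . -type f -exec grep -aH . {} \;)
    done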

My patches to the sg driver add debugfs support, so these produce
the same output:
    cat /proc/scsi/sg/debug
    cat /sys/kernel/debug/scsi_generic/snapshot

There is also a /sys/kernel/debug/scsi_generic/snapped file whose
contents reflect the driver's state when ioctl(<sg_fd>, SG_DEBUG, &one)
was last called.

When I test, the root file system is usually on an NVMe SSD, so the
state of all SCSI disks present should be dumped, as they are all part
of my test. Also, I find the netconsole module extremely useful and
have an old laptop on my network running:
   socat udp-recv:6665 - > socat.txt

picking up the UDP packets from netconsole on port 6665. Not quite as
good as monitoring a hardware serial console, but less fiddly. And since
most modern laptops don't have a serial port, netconsole is the only
option.
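
For reference, the sending side is just the netconsole module pointed at
that laptop; the interface name, IP address and MAC below are
illustrative:

    modprobe netconsole \
        netconsole=@/eth0,6665@192.168.1.77/00:11:22:33:44:55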

Another observation: upper-level issues seem to impact the submission
side of request handling (e.g. the IO hang I was trying to describe),
while the error injection I can do (e.g. using the scsi_debug driver)
impacts the completion side (obviously). Are there any tools to inject
errors into the block layer submission code?
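
The closest thing I am aware of is the fail_make_request fault-injection
hook (needs CONFIG_FAIL_MAKE_REQUEST), which fails bios at submission
time; the sdc name below is illustrative:

    echo 1  > /sys/block/sdc/make-it-fail
    echo 10 > /sys/kernel/debug/fail_make_request/probability  # percent
    echo -1 > /sys/kernel/debug/fail_make_request/times        # unlimited
    echo 1  > /sys/kernel/debug/fail_make_request/verbose

But that only fails whole bios with EIO, so something finer-grained
would still be useful.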

Doug Gilbert




