Re: [Comments Needed] scan vs remove_target deadlock

Mike Christie wrote:
>> - The plugged queue logic needs to be tracked down. Anyone have any
>>   insights?

> Are you referring to a problem in a function like our prep or request fn,
> where we could plug the queue if the device state is not online?

I believe so. One signature is that the scan i/o failed, starting recovery,
and right as recovery started, the sdev blocked, which caused recovery to
fail and the sdev to be taken offline.

> Do we need the scan mutex to change the device state? I mean, if a scan
> holds the scan_mutex and the transport class decides to remove the device,
> it tries to grab the scan_mutex by calling scsi_remove_device. But if we
> moved the state change invocation before the scan_mutex is taken in
> scsi_remove_device, then I assume the device should eventually get
> unplugged, and the prep or request_fn will see the new state and fail
> the request.

This may be what's needed. I don't understand all of this path yet, so I
can only speculate (and likely w/ error). Thus, the questions.

> I am also asking because, if you change the state from userspace, we do
> not grab the scan_mutex, so I did not know if that was a bug.


>> - The scan mutex, as coarse as it is, is really broken. It would be
>>   great to reduce the lock holding so the lock isn't held while an
>>   i/o is pending. This change would be extremely invasive to the scan
>>   code. Any other alternatives?
>> - If an sdev is "blocked", we shouldn't be wasting our time scanning it.
>>   Should we be adding this check before sending each scan i/o, or is
>>   there a better lower-level place for this check? Should we be

> Are you referring to our request or prep fn, or scsi_dispatch_cmd, or
> something else, like when the scan code calls scsi_execute?

Well, we want to stop the i/o before it gets on the request queue, so it
should be in scsi_execute(). We tried modifying scsi_execute() to bounce
i/os (w/ DID_NO_CONNECT) if the sdev was offline. This made the deadlock
harder to hit, but didn't remove it. We'll probably augment this with a
check for the blocked state as well.

Once the i/o is on the queue, we have to ensure the queues get run, and the
LLDD/transport will bounce the i/o. However, we would have liked to avoid
the delay while waiting on the queue to run.


>>   creating an explicit return code, or continuing to piggy-back on
>>   DID_NO_CONNECT? How do we deal with a scan i/o which may already be
>>   queued when the device is blocked?

> Are you thinking about adding an abort/cancel_request() callback to the
> request_queue? Something that could also be called by upper layers like
> DM, and could eventually end up calling a scsi abort function?

Actually - yes, and I knew things like DM would have liked this. There are
two states we have to be aware of, though. The 1st state is where the i/o is
queued but not yet given to the LLDD (which I believe is the stall case
that is hurting us here). The 2nd case is where the i/o has been issued
to the LLDD, but the sdev becomes blocked before the abort is issued.
Unfortunately, since the blocked state implies a loss of connectivity, we
can't send the abort, so we have to sit and wait until the block times out.
There is no real way to avoid this delay.

Hope this makes sense.

-- james s

-
To unsubscribe from this list: send the line "unsubscribe linux-scsi" in
the body of a message to majordomo@xxxxxxxxxxxxxxx
More majordomo info at  http://vger.kernel.org/majordomo-info.html
