Re: [PATCH] scsi: core: move scsi_host_busy() out of host lock for waking up EH handler

On Fri, Jan 12, 2024 at 7:43 AM Ming Lei <ming.lei@xxxxxxxxxx> wrote:
>
> On Fri, Jan 12, 2024 at 12:12:57PM +0100, Hannes Reinecke wrote:
> > On 1/12/24 08:00, Ming Lei wrote:
> > > Inside scsi_eh_wakeup(), scsi_host_busy() is called and checked under the
> > > host lock every time to decide whether the error handler kthread needs to
> > > be woken up.
> > >
> > > This can become too heavy during recovery, for example:
> > >
> > > - N hardware queues
> > > - queue depth is M for each hardware queue
> > > - each scsi_host_busy() call iterates over (N * M) tags/requests
> > >
> > > If recovery is triggered while all requests are in-flight, each
> > > scsi_eh_wakeup() is strictly serialized; by the time scsi_eh_wakeup() is
> > > called for the last in-flight request, scsi_host_busy() has already been
> > > run (N * M - 1) times, and requests have been iterated
> > > (N * M - 1) * (N * M) times.
> > >
> > > If both N and M are big enough, a hard lockup can be triggered while
> > > acquiring the host lock, and this has been observed on mpi3mr (128 hw
> > > queues, queue depth 8169).
> > >
> > > Fix the issue by calling scsi_host_busy() outside the host lock; we do
> > > not need the host lock to read the busy count, because the host lock
> > > never covers it anyway.
> > >
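
For context, the shape of the fix as I read it is roughly the sketch below --
compute the busy count before taking the host lock and pass it into
scsi_eh_wakeup(), so that only the comparison happens under the lock
(illustrative only, not quoted from the patch):

	/* before: the O(N * M) tag walk runs while holding host_lock */
	void scsi_eh_wakeup(struct Scsi_Host *shost)
	{
		lockdep_assert_held(shost->host_lock);

		if (scsi_host_busy(shost) == shost->host_failed)
			wake_up_process(shost->ehandler);
	}

	/* after: the caller does the tag walk first, without the lock */
	void scsi_eh_wakeup(struct Scsi_Host *shost, unsigned int busy)
	{
		lockdep_assert_held(shost->host_lock);

		if (busy == shost->host_failed)
			wake_up_process(shost->ehandler);
	}

	/* caller side, e.g. in scsi_dec_host_busy():
	 *
	 *	busy = scsi_host_busy(shost);	// no host_lock held here
	 *	spin_lock_irqsave(shost->host_lock, flags);
	 *	...
	 *	scsi_eh_wakeup(shost, busy);
	 *	spin_unlock_irqrestore(shost->host_lock, flags);
	 */
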
> > Can you share details of the hard lockup?
> > I do agree that scsi_host_busy() is an expensive operation, so it
> > might not be ideal to call it under a spin lock.
> > But I wonder where the lockup comes in here.
> > Care to explain?
>
> Recovery happens when there are N * M in-flight requests, and then
> scsi_dec_host_busy() can be called for each in-flight request/scmnd from
> irq context.
>
> The host lock serializes every scsi_eh_wakeup().
>
> Since each hardware queue has its own irq handler, there can be one request
> whose scsi_dec_host_busy() call spins on the host lock until the lock has
> been released by scsi_dec_host_busy() for all requests from all other
> hardware queues.
>
> The spin time can be long enough to trigger a hard lockup if N and M are
> big enough, and the total wait time can be:
>
>         (N - 1) * M * time_taken_in_scsi_host_busy().
>
> Meanwhile, the same story plays out with scsi_eh_inc_host_failed(), which is
> called from softirq context, so the host lock spinning can be even worse.
>
> It has been observed on mpi3mr with 128 (N) hw queues and a queue depth of
> 8169 (M).
>
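
Back-of-the-envelope with those numbers: each scsi_host_busy() call walks
N * M = 128 * 8169 ~= 1.05M tags, and by the formula above the last waiter
can spin behind (N - 1) * M ~= 1.04M prior lock holders, i.e. on the order
of 10^12 tag iterations spent spinning with interrupts disabled -- easily
long enough to trip the hardlockup detector.
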
> >
> > And if it leads to a lockup, aren't other instances calling scsi_host_busy()
> > under a spinlock affected, as well?
>
> It is only possible when scsi_host_busy() is called in a per-command
> situation.
>
>
> Thanks,
> Ming
>

I can't see why this wouldn't work, or why it would cause a problem such as
a lost wakeup, but the cost of iterating to obtain the host_busy value is
still being paid, just outside the host_lock.  If this has triggered a hard
lockup, should we revisit the algorithm, e.g. are we still delaying the EH
wakeup for a noticeable amount of time?  O(n^2) algorithms in the kernel
don't seem like the best idea.
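
For reference, scsi_host_busy() is (roughly, paraphrasing drivers/scsi/hosts.c
from memory; the callback signature varies a bit by kernel version) just a
walk over the whole tag set, which is where the O(N * M) cost per call comes
from:

	static bool scsi_host_check_in_flight(struct request *rq, void *data)
	{
		int *count = data;
		struct scsi_cmnd *cmd = blk_mq_rq_to_pdu(rq);

		/* only commands that have actually been started are counted */
		if (test_bit(SCMD_STATE_INFLIGHT, &cmd->state))
			(*count)++;
		return true;
	}

	int scsi_host_busy(struct Scsi_Host *shost)
	{
		int cnt = 0;

		/* walks the tags of every hardware queue in the tag set */
		blk_mq_tagset_busy_iter(&shost->tag_set,
					scsi_host_check_in_flight, &cnt);
		return cnt;
	}

Whether we want to pay for that walk on every command completion during
recovery, even without the lock held, is really what I'm asking above.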

In any case...
Reviewed-by: Ewan D. Milne <emilne@xxxxxxxxxx>

-Ewan
