Re: [PATCH v1] RDMA/core: Fix check_flush_dependency splat on addr_wq

[Date Prev][Date Next][Thread Prev][Thread Next][Date Index][Thread Index]

 



On Mon, Aug 29, 2022 at 06:15:28PM +0000, Chuck Lever III wrote:
> 
> 
> > On Aug 29, 2022, at 1:22 PM, Jason Gunthorpe <jgg@xxxxxxxxxx> wrote:
> > 
> > On Mon, Aug 29, 2022 at 05:14:56PM +0000, Chuck Lever III wrote:
> >> 
> >> 
> >>> On Aug 29, 2022, at 12:45 PM, Jason Gunthorpe <jgg@xxxxxxxxxx> wrote:
> >>> 
> >>> On Fri, Aug 26, 2022 at 07:57:04PM +0000, Chuck Lever III wrote:
> >>>> The connect APIs would be a place to start. In the meantime, though...
> >>>> 
> >>>> Two or three years ago I spent some effort to ensure that closing
> >>>> an RDMA connection leaves a client-side RPC/RDMA transport with no
> >>>> RDMA resources associated with it. It releases the CQs, QP, and all
> >>>> the MRs. That makes initial connect and reconnect both behave exactly
> >>>> the same, and guarantees that a reconnect does not get stuck with
> >>>> an old CQ that is no longer working or a QP that is in TIMEWAIT.
> >>>> 
> >>>> However that does mean that substantial resource allocation is
> >>>> done on every reconnect.
> >>> 
> >>> And if the resource allocations fail then what happens? The storage
> >>> ULP retries forever and is effectively deadlocked?
> >> 
> >> The reconnection attempt fails, and any resources allocated during
> >> that attempt are released. The ULP waits a bit then tries again
> >> until it works or is interrupted.
> >> 
> >> A deadlock might occur if one of those allocations triggers
> >> additional reclaim activity.
> > 
> > No, you are deadlocked now.
> 
> GFP_KERNEL can and will give up eventually, in which case
> the connection attempt fails and any previously allocated
> memory is released. Something else can then make progress.

Something else, maybe for a time, but likely the storage stack is
forever stuck.

> Single page allocation nearly always succeeds. It's the
> larger-order allocations that can block for long periods,
> and that's not necessarily because memory is low -- it can
> happen when one NUMA node's memory is heavily fragmented.

We've done a lot of work in the rdma stack and drivers to avoid
multi-page allocations. But we might need a lot of them, and I'm
skeptical about this claim they always succeed.

> This issue seems to be addressed in the socket stack, so I
> don't believe there's _no_ solution for RDMA. Usually the
> trick is to communicate the memalloc_noio settings somehow
> to other allocating threads.

And how do you do that when the other threads may have already started
their work before a reclaim writeback is triggered? We may already be
blocked inside GFP_KERNEL - heck we may already be inside reclaim from
within one of our own threads!

> If nothing else we can talk with the MM folks about planning
> improvements. We've just gone through this with NFS on the
> socket stack.

I'm not aware of any work in this area.. 
 
> > Even a simple case like mlx5 may cause the NIC to trigger a host
> > memory allocation, which is done in another thread and done as a
> > normal GFP_KERNEL. This memory allocation must progress before a
> > CQ/QP/MR/etc can be created. So now we are deadlocked again.
> 
> That sounds to me like a bug in mlx5. The driver is supposed
> to respect the caller's GFP settings. Again, if the request
> is small, it's likely to succeed anyway, but larger requests
> are not reliable and need to fail quickly so the system can
> move onto other fishing spots.

It is a design artifact, the FW is the one requesting the memory and
it has no idea about kernel GFP flags. As above a FW thread could have
already started requesting memory for some other purpose and we may
already be inside the mlx5 FW page request thread under a GFP_KERNEL
allocation doing reclaim. How can this ever be fixed?

> I would like to at least get rid of the check_flush_dependency
> splat, which will fire a lot more often than we will get stuck
> in a reclaim allocation corner. I'm testing a patch that
> converts rpcrdma not to use MEM_RECLAIM work queues and notes
> how extensive the problem actually is.

Ok

Jason



[Index of Archives]     [Linux Filesystem Development]     [Linux USB Development]     [Linux Media Development]     [Video for Linux]     [Linux NILFS]     [Linux Audio Users]     [Yosemite Info]     [Linux SCSI]

  Powered by Linux