Re: [PATCHv2 1/1] RDMA/rxe: Fix a dead lock problem

On 2022/4/12 22:31, Jason Gunthorpe wrote:
On Tue, Apr 12, 2022 at 10:28:16PM +0800, Yanjun Zhu wrote:
On 2022/4/12 21:53, Jason Gunthorpe wrote:
On Tue, Apr 12, 2022 at 09:43:28PM +0800, Yanjun Zhu wrote:
On 2022/4/11 19:50, Jason Gunthorpe wrote:
On Mon, Apr 11, 2022 at 04:00:18PM -0400, yanjun.zhu@xxxxxxxxx wrote:
@@ -138,8 +139,10 @@ void *rxe_alloc(struct rxe_pool *pool)
    	elem->obj = obj;
    	kref_init(&elem->ref_cnt);
-	err = xa_alloc_cyclic(&pool->xa, &elem->index, elem, pool->limit,
-			      &pool->next, GFP_KERNEL);
+	xa_lock_irqsave(&pool->xa, flags);
+	err = __xa_alloc_cyclic(&pool->xa, &elem->index, elem, pool->limit,
+				&pool->next, GFP_ATOMIC);
+	xa_unlock_irqrestore(&pool->xa, flags);
No to using atomics, this needs to be either the _irq or _bh variant
If I understand you correctly, you mean that we should use
xa_lock_irq/xa_unlock_irq or xa_lock_bh/xa_unlock_bh instead of
xa_lock_irqsave/xa_unlock_irqrestore?
This is correct

If so, when xa_lock_irq/xa_unlock_irq or xa_lock_bh/xa_unlock_bh is used here,
the warning below appears. It means that __rxe_add_to_pool disables
softirqs, but fpu_clone enables them.
I don't know what this is, you need to show the whole debug.
The following warnings appear if xa_lock_bh + __xa_alloc(..., GFP_KERNEL)
is used; the diff is as below.

If xa_lock_irqsave/irqrestore + __xa_alloc(..., GFP_ATOMIC) is used,
the warning does not appear.
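
For illustration only, the _bh variant described above would look roughly like
this in rxe_alloc (a sketch modelled on the v2 hunk quoted earlier, not the
trimmed diff):

	xa_lock_bh(&pool->xa);
	/* GFP_KERNEL can sleep; that is what trips the warning when this
	 * path is reached with a spinlock already held by the caller.
	 */
	err = __xa_alloc_cyclic(&pool->xa, &elem->index, elem, pool->limit,
				&pool->next, GFP_KERNEL);
	xa_unlock_bh(&pool->xa);
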
That is because this was called in an atomic context:

[   92.107490]  __rxe_add_to_pool+0x76/0xa0 [rdma_rxe]
[   92.107500]  rxe_create_ah+0x59/0xe0 [rdma_rxe]
[   92.107511]  _rdma_create_ah+0x148/0x180 [ib_core]
[   92.107546]  rdma_create_ah+0xb7/0xf0 [ib_core]
[   92.107565]  cm_alloc_msg+0x5c/0x170 [ib_cm]
[   92.107577]  cm_alloc_priv_msg+0x1b/0x50 [ib_cm]
[   92.107584]  ib_send_cm_req+0x213/0x3f0 [ib_cm]
[   92.107613]  rdma_connect_locked+0x238/0x8e0 [rdma_cm]
[   92.107637]  rdma_connect+0x2b/0x40 [rdma_cm]
[   92.107646]  ucma_connect+0x128/0x1a0 [rdma_ucm]
[   92.107690]  ucma_write+0xaf/0x140 [rdma_ucm]
[   92.107698]  vfs_write+0xb8/0x370
[   92.107707]  ksys_write+0xbb/0xd0
Meaning the GFP_KERNEL is already wrong.

The AH path needs to have its own special atomic allocation flow and
you have to use an irq lock and GFP_ATOMIC for it.
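
A possible shape for such a separate flow, purely as a sketch (the helper name
__rxe_add_to_pool_atomic and its signature are assumptions for illustration,
not the actual rxe code):

	/* Hypothetical AH-only variant of __rxe_add_to_pool; name and
	 * placement are illustrative only.
	 */
	static int __rxe_add_to_pool_atomic(struct rxe_pool *pool,
					    struct rxe_pool_elem *elem)
	{
		unsigned long flags;
		int err;

		/* irq lock + GFP_ATOMIC, as suggested above, so this is safe
		 * even when the caller already holds a spinlock.
		 */
		xa_lock_irqsave(&pool->xa, flags);
		err = __xa_alloc_cyclic(&pool->xa, &elem->index, elem,
					pool->limit, &pool->next, GFP_ATOMIC);
		xa_unlock_irqrestore(&pool->xa, flags);

		return err;
	}

rxe_create_ah() would use a variant like this because it can be reached under
cm's mad_agent_lock (see the trace above), while the other object types keep
the existing GFP_KERNEL flow.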

static struct ib_mad_send_buf *cm_alloc_msg(struct cm_id_private *cm_id_priv)
{
...
	spin_lock(&cm_id_priv->av.port->cm_dev->mad_agent_lock);
...
	/* rdma_create_ah() is called with mad_agent_lock held, so the
	 * rxe pool allocation underneath it runs in atomic context.
	 */
	ah = rdma_create_ah(mad_agent->qp->pd, &cm_id_priv->av.ah_attr, 0);
...
	spin_unlock(&cm_id_priv->av.port->cm_dev->mad_agent_lock);
...
}
Yes. Exactly.

In cm_alloc_msg, a spinlock is held across rdma_create_ah, so __rxe_add_to_pool should not use GFP_KERNEL on this path.

Thanks a lot. I will send the latest patch very soon.


Zhu Yanjun


Jason


