This issue appears to exist only in Linux versions 2.6.26 through 4.14
inclusive:

With the introduction of commit f56bcd8013566 ("IPoIB: Use separate CQ
for UD send completions"), TX work completions are only processed once
there are at least MAX_SEND_CQE + 1 (i.e. 17) outstanding TX work
requests.

Unfortunately, that also delays the completion handler, which in turn
holds on to the references owned by each skb, since dev_kfree_skb_any()
won't be called for a very long time. For example, we've observed
nf_conntrack_cleanup_net_list() spin for hours on a sufficiently idle
interface, waiting for net->ct.count to drop to zero.

This fix arms the TX CQ after those poll_tx() loops, so that
ipoib_send_comp_handler() runs and the skbs are freed promptly: while
processing completions one-by-one is clearly more costly than doing so
in bulk, holding on to skb resources for a potentially unbounded amount
of time is the less favorable trade-off.

This issue no longer exists in Linux 4.15 and later, because commit
8966e28d2e40c ("IB/ipoib: Use NAPI in UD/TX flows") does call
ib_req_notify_cq() on send_cq.

Fixes: f56bcd8013566 ("IPoIB: Use separate CQ for UD send completions")
Signed-off-by: Gerd Rausch <gerd.rausch@xxxxxxxxxx>
---
A standalone sketch of the poll-then-re-arm pattern used here follows
after the patch.

 drivers/infiniband/ulp/ipoib/ipoib_ib.c | 20 +++++++++++++++-----
 1 file changed, 15 insertions(+), 5 deletions(-)

diff --git a/drivers/infiniband/ulp/ipoib/ipoib_ib.c b/drivers/infiniband/ulp/ipoib/ipoib_ib.c
index 18f732aa15101..b26b31b9e455e 100644
--- a/drivers/infiniband/ulp/ipoib/ipoib_ib.c
+++ b/drivers/infiniband/ulp/ipoib/ipoib_ib.c
@@ -491,8 +491,13 @@ static void drain_tx_cq(struct net_device *dev)
 	struct ipoib_dev_priv *priv = netdev_priv(dev);
 
 	netif_tx_lock(dev);
-	while (poll_tx(priv))
-		; /* nothing */
+
+	do {
+		while (poll_tx(priv))
+			; /* nothing */
+	} while (ib_req_notify_cq(priv->send_cq,
+				  IB_CQ_NEXT_COMP |
+				  IB_CQ_REPORT_MISSED_EVENTS) > 0);
 
 	if (netif_queue_stopped(dev))
 		mod_timer(&priv->poll_timer, jiffies + 1);
@@ -628,9 +633,14 @@ void ipoib_send(struct net_device *dev, struct sk_buff *skb,
 		++priv->tx_head;
 	}
 
-	if (unlikely(priv->tx_outstanding > MAX_SEND_CQE))
-		while (poll_tx(priv))
-			; /* nothing */
+	if (unlikely(priv->tx_outstanding > MAX_SEND_CQE)) {
+		do {
+			while (poll_tx(priv))
+				; /* nothing */
+		} while (ib_req_notify_cq(priv->send_cq,
+					  IB_CQ_NEXT_COMP |
+					  IB_CQ_REPORT_MISSED_EVENTS) > 0);
+	}
 }
 
 static void __ipoib_reap_ah(struct net_device *dev)
-- 
2.24.1
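To make the pattern easier to follow in isolation, here is a minimal
sketch of the same poll-then-re-arm idiom against the generic ib_verbs
API. It is illustrative only: drain_cq_example() and NUM_WC are made-up
names, not part of the patch, and the per-completion handling is
elided.

	/* Sketch only; drain_cq_example() and NUM_WC are illustrative. */
	#include <rdma/ib_verbs.h>

	#define NUM_WC 16			/* arbitrary poll batch size */

	static void drain_cq_example(struct ib_cq *cq)
	{
		struct ib_wc wc[NUM_WC];
		int n, i;

		do {
			/* Reap everything currently queued on the CQ. */
			while ((n = ib_poll_cq(cq, NUM_WC, wc)) > 0) {
				for (i = 0; i < n; ++i) {
					/* handle wc[i], e.g. free its skb */
				}
			}

			/*
			 * Re-arm the CQ for the next completion.  With
			 * IB_CQ_REPORT_MISSED_EVENTS, a return value > 0
			 * means completions were already queued at re-arm
			 * time (they slipped in after the final poll) and
			 * will not raise an event, so loop and poll again.
			 */
		} while (ib_req_notify_cq(cq, IB_CQ_NEXT_COMP |
					      IB_CQ_REPORT_MISSED_EVENTS) > 0);
	}

The return-value check is the crux: arming with IB_CQ_NEXT_COMP alone
only requests an event for completions added after the re-arm, so a
completion that arrives between the final poll and the re-arm would
neither be polled nor generate an event. IB_CQ_REPORT_MISSED_EVENTS
makes that window detectable, letting the loop poll again instead of
waiting for an interrupt that will never fire.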