Re: [PATCH rdma-rc 4/9] IB/ipoib: Fix double free of skb in case of multicast traffic in CM mode

On Thu, Feb 13, 2020 at 08:36:22PM +0200, Leon Romanovsky wrote:
> On Thu, Feb 13, 2020 at 02:26:55PM -0400, Jason Gunthorpe wrote:
> > On Thu, Feb 13, 2020 at 08:10:12PM +0200, Leon Romanovsky wrote:
> > > On Thu, Feb 13, 2020 at 11:37:43AM -0400, Jason Gunthorpe wrote:
> > > > On Wed, Feb 12, 2020 at 09:26:30AM +0200, Leon Romanovsky wrote:
> > > >
> > > > > diff --git a/drivers/infiniband/ulp/ipoib/ipoib.h b/drivers/infiniband/ulp/ipoib/ipoib.h
> > > > > index 2aa3457a30ce..c614cb87d09b 100644
> > > > > +++ b/drivers/infiniband/ulp/ipoib/ipoib.h
> > > > > @@ -379,6 +379,7 @@ struct ipoib_dev_priv {
> > > > >  	struct ipoib_tx_buf *tx_ring;
> > > > >  	unsigned int	     tx_head;
> > > > >  	unsigned int	     tx_tail;
> > > > > +	atomic_t             tx_outstanding;
> > > > >  	struct ib_sge	     tx_sge[MAX_SKB_FRAGS + 1];
> > > > >  	struct ib_ud_wr      tx_wr;
> > > > >  	struct ib_wc	     send_wc[MAX_SEND_CQE];
> > > > > diff --git a/drivers/infiniband/ulp/ipoib/ipoib_cm.c b/drivers/infiniband/ulp/ipoib/ipoib_cm.c
> > > > > index c59e00a0881f..db6aace83fe5 100644
> > > > > +++ b/drivers/infiniband/ulp/ipoib/ipoib_cm.c
> > > > > @@ -756,7 +756,7 @@ void ipoib_cm_send(struct net_device *dev, struct sk_buff *skb, struct ipoib_cm_
> > > > >  		return;
> > > > >  	}
> > > > >
> > > > > -	if ((priv->tx_head - priv->tx_tail) == ipoib_sendq_size - 1) {
> > > > > +	if (atomic_read(&priv->tx_outstanding) == ipoib_sendq_size - 1) {
> > > > >  		ipoib_dbg(priv, "TX ring 0x%x full, stopping kernel net queue\n",
> > > > >  			  tx->qp->qp_num);
> > > > >  		netif_stop_queue(dev);
> > > > > @@ -786,7 +786,7 @@ void ipoib_cm_send(struct net_device *dev, struct sk_buff *skb, struct ipoib_cm_
> > > > >  	} else {
> > > > >  		netif_trans_update(dev);
> > > > >  		++tx->tx_head;
> > > > > -		++priv->tx_head;
> > > > > +		atomic_inc(&priv->tx_outstanding);
> > > > >  	}
> > > >
> > > > This use of an atomic is very weird, probably wrong.
> > > >
> > > > Why is it an atomic?  priv->tx_head wasn't an atomic, and every place
> > > > touching tx_outstanding was also touching tx_head.
> > > >
> > > > I assume there is some hidden locking here? Or much more stuff is
> > > > busted up.
> > > >
> > > > In that case, drop the atomic.
> > > >
> > > > However, if the atomic is needed (where/why?) then something has to
> > > > be dealing with the races, and if the write side is fully locked then
> > > > an atomic is the wrong choice, use READ_ONCE/WRITE_ONCE instead
> > >
> > > I thought that atomic_t was appropriate here. Valentin wanted to
> > > implement a "window", and he needed to ensure that this window is
> > > counted correctly, even though it doesn't need to be 100% accurate.
> >
> > It does need to be 100% accurate because the stop_queue condition is:
> >
> >  +	if (atomic_read(&priv->tx_outstanding) == ipoib_sendq_size - 1) {
> 
> So it would be better to write ">=" instead of "==".

Then you risk overflowing the send q.

The required scheme appears to be to globally bound the # of packets
outstanding for TX such that the global bound is lower than the send
queue depth of any of the (many) QPs, so there will always be a place
to put the packets.

So the accounting must be exact, or at least pessimistic.
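
To be concrete about what "pessimistic" means here, if the atomic
counter were kept, the minimal safe shape would be something like the
below (a sketch only, not an endorsement of the counter): bump the
count before posting, so a stale read can only stop the queue too
early, never let the send queue overflow.

    /* sketch: bump before posting so the check can never under-count */
    if (atomic_inc_return(&priv->tx_outstanding) >= ipoib_sendq_size - 1)
            netif_stop_queue(dev);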

This isn't a multiq netdev, so AFAICT, the tx side is single threaded
by netdev, and it looks like the wc side was single threaded by NAPI
as well.

The old code used two variables, each written from a single thread, and
then read together to generate the state. With the missing READ_ONCE
added, it should look like this:

    if ((priv->tx_head - READ_ONCE(priv->tx_tail)) == ipoib_sendq_size - 1) {

I feel like sticking with the single writer packets posted/completed
scheme is better and clearer than trying to collapse it into a single
atomic.
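
For reference, roughly the shape that scheme takes once both counters
stay single-writer (a sketch reusing the existing field names and the
driver's usual stop/wake thresholds, not a literal patch):

    /* send path - serialized by netdev, sole writer of tx_head */
    if ((priv->tx_head - READ_ONCE(priv->tx_tail)) == ipoib_sendq_size - 1)
            netif_stop_queue(dev);
    /* ... after a successful post: */
    WRITE_ONCE(priv->tx_head, priv->tx_head + 1);

    /* completion path - serialized by NAPI, sole writer of tx_tail */
    WRITE_ONCE(priv->tx_tail, priv->tx_tail + 1);
    if (netif_queue_stopped(dev) &&
        READ_ONCE(priv->tx_head) - priv->tx_tail <= ipoib_sendq_size >> 1)
            netif_wake_queue(dev);

Each counter has exactly one writer, so the WRITE_ONCE on the writer
side is mostly annotation; the point is that the cross-thread reads are
marked and no atomic is needed.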

> > And presumably the whole problem here is overflowing some fixed-size
> > queue object.
> >
> > That atomic is only needed if there is concurrency, and where is the
> > concurrency?
> 
> Depends on whether you believe the description in the commit message :)

I do.. It is obviously wrong now that you point to it :) Each QP must
have exclusive control of its head/tail pointers; having a CM QP reach
into the head/tail of the UD QP is certainly wrong.

Jason


