On Tue, 2 Jul 2019 17:36:34 +0300, Ilya Maximets wrote:
> Unlike driver mode, generic xdp receive could be triggered
> by different threads on different CPU cores at the same time
> leading to the fill and rx queue breakage. For example, this
> could happen while sending packets from two processes to the
> first interface of veth pair while the second part of it is
> open with AF_XDP socket.
>
> Need to take a lock for each generic receive to avoid race.
>
> Fixes: c497176cb2e4 ("xsk: add Rx receive functions and poll support")
> Signed-off-by: Ilya Maximets <i.maximets@xxxxxxxxxxx>
>
> diff --git a/net/xdp/xsk.c b/net/xdp/xsk.c
> index a14e8864e4fa..19f41d2b670c 100644
> --- a/net/xdp/xsk.c
> +++ b/net/xdp/xsk.c
> @@ -119,17 +119,22 @@ int xsk_generic_rcv(struct xdp_sock *xs, struct xdp_buff *xdp)
>  {
>  	u32 metalen = xdp->data - xdp->data_meta;
>  	u32 len = xdp->data_end - xdp->data;
> +	unsigned long flags;
>  	void *buffer;
>  	u64 addr;
>  	int err;
>
> -	if (xs->dev != xdp->rxq->dev || xs->queue_id != xdp->rxq->queue_index)
> -		return -EINVAL;
> +	spin_lock_irqsave(&xs->rx_lock, flags);

Why _irqsave, rather than _bh?

> +	if (xs->dev != xdp->rxq->dev || xs->queue_id != xdp->rxq->queue_index) {
> +		err = -EINVAL;
> +		goto out_unlock;
> +	}
>
>  	if (!xskq_peek_addr(xs->umem->fq, &addr) ||
>  	    len > xs->umem->chunk_size_nohr - XDP_PACKET_HEADROOM) {
> -		xs->rx_dropped++;
> -		return -ENOSPC;
> +		err = -ENOSPC;
> +		goto out_drop;
>  	}
>
>  	addr += xs->umem->headroom;
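
To make the question above concrete, the _bh variant being asked about would look roughly like the sketch below. This is not part of the posted patch; it assumes xs->rx_lock is only ever taken from process or softirq context (the generic XDP receive path) and never from a hard-IRQ handler, in which case disabling bottom halves is sufficient and the flags variable goes away:

	/* Sketch: serialize concurrent generic-receive callers by
	 * disabling bottom halves instead of hard interrupts.
	 * Assumes no hard-IRQ context ever takes xs->rx_lock.
	 */
	spin_lock_bh(&xs->rx_lock);

	if (xs->dev != xdp->rxq->dev || xs->queue_id != xdp->rxq->queue_index) {
		err = -EINVAL;
		goto out_unlock;
	}

	/* ... same receive body as in the patch ... */

out_unlock:
	spin_unlock_bh(&xs->rx_lock);
	return err;

The trade-off is that _irqsave is safe even if the lock were ever taken from hard-IRQ context, at the cost of disabling interrupts for the whole critical section, while _bh only blocks softirqs on the local CPU.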