On Thu, Dec 22, 2022 at 11:18 AM Paolo Abeni <pabeni@xxxxxxxxxx> wrote:
>
> On Tue, 2022-12-20 at 12:59 -0600, Shawn Bohrer wrote:
> > When AF_XDP is used on a veth interface the RX ring is updated in two
> > steps. veth_xdp_rcv() removes packet descriptors from the FILL ring,
> > fills them and places them in the RX ring, updating the cached_prod
> > pointer. Later xdp_do_flush() syncs the RX ring prod pointer with the
> > cached_prod pointer, allowing user-space to see the recently filled
> > descriptors. The rings are intended to be SPSC; however, the existing
> > order in veth_poll allows the xdp_do_flush() to run concurrently with
> > another CPU, creating a race condition that allows user-space to see
> > old or uninitialized descriptors in the RX ring. This bug has been
> > observed in production systems.
> >
> > To summarize, we are expecting this ordering:
> >
> > CPU 0 __xsk_rcv_zc()
> > CPU 0 __xsk_map_flush()
> > CPU 2 __xsk_rcv_zc()
> > CPU 2 __xsk_map_flush()
> >
> > But we are seeing this order:
> >
> > CPU 0 __xsk_rcv_zc()
> > CPU 2 __xsk_rcv_zc()
> > CPU 0 __xsk_map_flush()
> > CPU 2 __xsk_map_flush()
> >
> > This occurs because we rely on NAPI to ensure that only one napi_poll
> > handler is running at a time for the given veth receive queue.
> > napi_schedule_prep() will prevent multiple instances from getting
> > scheduled. However, calling napi_complete_done() signals that this
> > napi_poll is complete and allows subsequent calls to
> > napi_schedule_prep() and __napi_schedule() to succeed in scheduling a
> > concurrent napi_poll before xdp_do_flush() has been called. For the
> > veth driver a concurrent call to napi_schedule_prep() and
> > __napi_schedule() can occur on a different CPU because the veth xmit
> > path can additionally schedule a napi_poll, creating the race.
>
> The above looks like a generic problem that other drivers could hit.
> Perhaps it would be worth updating the xdp_do_flush() doc text to
> explicitly mention it must be called before napi_complete_done().

Good observation. I took a quick peek at this and it seems there are
at least 5 more drivers that can call napi_complete_done() before
xdp_do_flush():

drivers/net/ethernet/qlogic/qede/
drivers/net/ethernet/freescale/dpaa2
drivers/net/ethernet/freescale/dpaa
drivers/net/ethernet/microchip/lan966x
drivers/net/virtio_net.c

The question is then if this race can occur on these five drivers.
Dpaa2 has AF_XDP zero-copy support implemented, so it can suffer from
this race, as its Tx zero-copy path is basically just a napi_schedule()
that can be invoked from multiple processes at the same time. In
regards to the others, I do not know. Would it be prudent to just
switch the order of xdp_do_flush() and napi_complete_done() in all
these drivers, or would that be too defensive? I have included a rough
sketch of the safe ordering at the end of this mail.

> (in a separate, net-next patch)
>
> Thanks!
>
> Paolo
>
> >
> > The fix, as suggested by Magnus Karlsson, is to simply move the
> > xdp_do_flush() call before napi_complete_done(). This syncs the
> > producer ring pointers before another instance of napi_poll can be
> > scheduled on another CPU. It will also slightly improve performance by
> > moving the flush closer to when the descriptors were placed in the
> > RX ring.
> >
> > Fixes: d1396004dd86 ("veth: Add XDP TX and REDIRECT")
> > Suggested-by: Magnus Karlsson <magnus.karlsson@xxxxxxxxx>
> > Signed-off-by: Shawn Bohrer <sbohrer@xxxxxxxxxxxxxx>
> > ---
> >  drivers/net/veth.c | 5 +++--
> >  1 file changed, 3 insertions(+), 2 deletions(-)
> >
> > diff --git a/drivers/net/veth.c b/drivers/net/veth.c
> > index ac7c0653695f..dfc7d87fad59 100644
> > --- a/drivers/net/veth.c
> > +++ b/drivers/net/veth.c
> > @@ -974,6 +974,9 @@ static int veth_poll(struct napi_struct *napi, int budget)
> >  	xdp_set_return_frame_no_direct();
> >  	done = veth_xdp_rcv(rq, budget, &bq, &stats);
> >
> > +	if (stats.xdp_redirect > 0)
> > +		xdp_do_flush();
> > +
> >  	if (done < budget && napi_complete_done(napi, done)) {
> >  		/* Write rx_notify_masked before reading ptr_ring */
> >  		smp_store_mb(rq->rx_notify_masked, false);
> > @@ -987,8 +990,6 @@ static int veth_poll(struct napi_struct *napi, int budget)
> >
> >  	if (stats.xdp_tx > 0)
> >  		veth_xdp_flush(rq, &bq);
> > -	if (stats.xdp_redirect > 0)
> > -		xdp_do_flush();
> >  	xdp_clear_return_frame_no_direct();
> >
> >  	return done;
>
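For reference, here is the rough sketch of the ordering I have in mind
for a generic poll handler. Note that foo_poll(), foo_rx(),
foo_rx_queue and foo_irq_enable() are made-up placeholders, not code
from any of the drivers above:

static int foo_poll(struct napi_struct *napi, int budget)
{
	struct foo_rx_queue *rq = container_of(napi, struct foo_rx_queue,
					       napi);
	unsigned int xdp_redirect = 0;
	int done;

	/* Receive up to budget packets; foo_rx() is assumed to count
	 * XDP_REDIRECT verdicts in xdp_redirect.
	 */
	done = foo_rx(rq, budget, &xdp_redirect);

	/* Flush redirected frames while this napi_poll still has
	 * exclusive ownership of the rings. Calling xdp_do_flush()
	 * before napi_complete_done() closes the window in which
	 * another CPU could schedule a concurrent napi_poll and let
	 * user-space observe ring updates out of order.
	 */
	if (xdp_redirect)
		xdp_do_flush();

	/* Only after the flush is it safe to allow this NAPI
	 * instance to be rescheduled.
	 */
	if (done < budget && napi_complete_done(napi, done))
		foo_irq_enable(rq);

	return done;
}

This is the same order the veth patch above establishes; switching the
two calls in the other five drivers would follow the same pattern.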