Magnus Karlsson <magnus.karlsson@xxxxxxxxx> writes:

> On Wed, Jan 11, 2023 at 3:21 PM Toke Høiland-Jørgensen <toke@xxxxxxxxxx> wrote:
>>
>> Magnus Karlsson <magnus.karlsson@xxxxxxxxx> writes:
>>
>> > On Thu, Dec 22, 2022 at 11:18 AM Paolo Abeni <pabeni@xxxxxxxxxx> wrote:
>> >>
>> >> On Tue, 2022-12-20 at 12:59 -0600, Shawn Bohrer wrote:
>> >> > When AF_XDP is used on a veth interface the RX ring is updated in
>> >> > two steps. veth_xdp_rcv() removes packet descriptors from the FILL
>> >> > ring, fills them, and places them in the RX ring, updating the
>> >> > cached_prod pointer. Later xdp_do_flush() syncs the RX ring prod
>> >> > pointer with the cached_prod pointer, allowing user-space to see
>> >> > the recently filled-in descriptors. The rings are intended to be
>> >> > SPSC; however, the existing order in veth_poll allows xdp_do_flush()
>> >> > to run concurrently with another CPU, creating a race condition
>> >> > that allows user-space to see old or uninitialized descriptors in
>> >> > the RX ring. This bug has been observed in production systems.
>> >> >
>> >> > To summarize, we are expecting this ordering:
>> >> >
>> >> > CPU 0 __xsk_rcv_zc()
>> >> > CPU 0 __xsk_map_flush()
>> >> > CPU 2 __xsk_rcv_zc()
>> >> > CPU 2 __xsk_map_flush()
>> >> >
>> >> > But we are seeing this order:
>> >> >
>> >> > CPU 0 __xsk_rcv_zc()
>> >> > CPU 2 __xsk_rcv_zc()
>> >> > CPU 0 __xsk_map_flush()
>> >> > CPU 2 __xsk_map_flush()
>> >> >
>> >> > This occurs because we rely on NAPI to ensure that only one napi_poll
>> >> > handler is running at a time for the given veth receive queue.
>> >> > napi_schedule_prep() will prevent multiple instances from getting
>> >> > scheduled. However, calling napi_complete_done() signals that this
>> >> > napi_poll is complete and allows subsequent calls to
>> >> > napi_schedule_prep() and __napi_schedule() to succeed in scheduling
>> >> > a concurrent napi_poll before xdp_do_flush() has been called. For
>> >> > the veth driver, a concurrent call to napi_schedule_prep() and
>> >> > __napi_schedule() can occur on a different CPU because the veth
>> >> > xmit path can additionally schedule a napi_poll, creating the race.
>> >>
>> >> The above looks like a generic problem that other drivers could hit.
>> >> Perhaps it would be worth updating the xdp_do_flush() doc text to
>> >> explicitly mention that it must be called before napi_complete_done().
>> >
>> > Good observation. I took a quick peek at this and it seems there are
>> > at least 5 more drivers that can call napi_complete_done() before
>> > xdp_do_flush():
>> >
>> > drivers/net/ethernet/qlogic/qede/
>> > drivers/net/ethernet/freescale/dpaa2
>> > drivers/net/ethernet/freescale/dpaa
>> > drivers/net/ethernet/microchip/lan966x
>> > drivers/net/virtio_net.c
>> >
>> > The question is then whether this race can occur on these five
>> > drivers. Dpaa2 has AF_XDP zero-copy support implemented, so it can
>> > suffer from this race as the Tx zero-copy path is basically just a
>> > napi_schedule() and it can be invoked from multiple processes at the
>> > same time. As for the others, I do not know.
>> >
>> > Would it be prudent to just switch the order of xdp_do_flush() and
>> > napi_complete_done() in all these drivers, or would that be too
>> > defensive?
>>
>> We rely on being inside a single NAPI instance through to the
>> xdp_do_flush() call for RCU protection of all in-kernel data structures
>> as well[0]. Not sure if this leads to actual real-world bugs for the
>> in-kernel path, but conceptually it's wrong at least. So yeah, I think
>> we should definitely swap the order everywhere and document this!
>
> OK, let me take a stab at it. For at least the first four, it will be
> compilation tested only from my side since I do not own any of those
> SoCs/cards. Will need the help of those maintainers for sure.

Sounds good, thanks! :)

-Toke
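For reference, the fix being agreed on above boils down to the following
shape of a NAPI poll handler: the redirect flush is done while the poll
instance still owns the queue, and only then is NAPI marked complete.
This is a minimal sketch, not code from veth or any of the drivers
listed; example_poll(), example_rcv() and struct example_rx_queue are
made-up names standing in for a driver's real RX path, while
xdp_do_flush() and napi_complete_done() are the actual kernel helpers.

/* Sketch of the intended ordering in a driver's NAPI poll handler. */
static int example_poll(struct napi_struct *napi, int budget)
{
        struct example_rx_queue *rq =
                container_of(napi, struct example_rx_queue, napi);
        bool redirected = false;
        int done;

        /* Receive up to 'budget' packets; XDP_REDIRECT targets such as
         * AF_XDP RX rings have only been filled via their cached
         * producer pointers at this point.
         */
        done = example_rcv(rq, budget, &redirected);

        /* Publish the cached producer pointers *before* completing NAPI.
         * Once napi_complete_done() runs, another CPU can schedule and
         * run a new poll for this queue, and a concurrent flush would
         * violate the single-producer assumption of the rings.
         */
        if (redirected)
                xdp_do_flush();

        if (done < budget && napi_complete_done(napi, done)) {
                /* re-enable device interrupts for this queue here */
        }

        return done;
}

The same ordering matters for all redirect targets, not just AF_XDP
sockets, since xdp_do_flush() also flushes the devmap and cpumap queues
that Toke's RCU point refers to.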