On Fri, Jun 17, 2022 at 07:56:17PM -0700, John Fastabend wrote:
> Maciej Fijalkowski wrote:
> > Some of the drivers that implement support for AF_XDP Zero Copy (like
> > ice) can take a lazy approach to cleaning Tx descriptors. For ZC, when
> > a descriptor is cleaned, it is placed onto the AF_XDP completion queue.
> > This means that the current implementation of wait_for_tx_completion()
> > in xdpxceiver can get into an infinite loop, as some of the descriptors
> > can never reach the CQ.
> >
> > This function can be changed to rely on pkts_in_flight instead.
> >
> > Acked-by: Magnus Karlsson <magnus.karlsson@xxxxxxxxx>
> > Signed-off-by: Maciej Fijalkowski <maciej.fijalkowski@xxxxxxxxx>
> > ---
>
> Sorry, I'm going to need more details to follow what's going on here.
>
> In send_pkts() we do the expected thing and send all the pkts and
> then call wait_for_tx_completion().
>
> Wait for completion is obvious:
>
> static void wait_for_tx_completion(struct xsk_socket_info *xsk)
> {
> 	while (xsk->outstanding_tx)
> 		complete_pkts(xsk, BATCH_SIZE);
> }
>
> The 'outstanding_tx' counter appears to be decremented in complete_pkts().
> This is done by looking at xsk_ring_cons__peek(), which makes sense to me;
> until a pkt shows up there, we don't know it has been completely sent and
> that we can release the resources.

This is necessary for scenarios like l2fwd in xdpsock, where you take
entries from the cq back to the fq to refill the Rx HW queue and keep
the flow going (see the sketch at the bottom of this mail).

> Now if you just zero it on exit and call it good, how do you know the
> resources are safe to clean up? Or that you don't have a real bug
> in the driver that isn't correctly releasing the resource.

xdpxceiver spawns two threads, one for Tx and one for Rx. From the Rx
thread's point of view, if receive_pkts() has finished its job, then the
Tx thread has transmitted all of the frames that the Rx thread expected
to receive. The zeroing is there only to terminate the Tx thread and
finish the current test case, so that further cases under the current
mode can be executed.

> How are users expected to use a lazy approach to tx descriptor cleaning
> in a case like this, e.g. on exit? It seems we need to fix the root
> cause of ice not putting things on the completion queue, or I
> misunderstood the patch.

ice puts things on the cq lazily on purpose, as we added batching to the
Tx side and clean descriptors only when it's needed. We need to exit the
spawned threads before we detach the socket from the interface. Socket
detach is done from the main thread, and at that point the driver goes
through the Tx ring and places the descriptors that are left onto the
completion queue (also sketched below).

> >  tools/testing/selftests/bpf/xdpxceiver.c | 3 ++-
> >  tools/testing/selftests/bpf/xdpxceiver.h | 2 +-
> >  2 files changed, 3 insertions(+), 2 deletions(-)
> >
> > diff --git a/tools/testing/selftests/bpf/xdpxceiver.c b/tools/testing/selftests/bpf/xdpxceiver.c
> > index de4cf0432243..13a3b2ac2399 100644
> > --- a/tools/testing/selftests/bpf/xdpxceiver.c
> > +++ b/tools/testing/selftests/bpf/xdpxceiver.c
> > @@ -965,7 +965,7 @@ static int __send_pkts(struct ifobject *ifobject, u32 *pkt_nb)
> >
> >  static void wait_for_tx_completion(struct xsk_socket_info *xsk)
> >  {
> > -	while (xsk->outstanding_tx)
> > +	while (pkts_in_flight)
> >  		complete_pkts(xsk, BATCH_SIZE);
> >  }
> >
> > @@ -1269,6 +1269,7 @@ static void *worker_testapp_validate_rx(void *arg)
> >  		pthread_mutex_unlock(&pacing_mutex);
> >  	}
> >
> > +	pkts_in_flight = 0;
> >  	pthread_exit(NULL);
> >  }
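
For reference, the cq -> fq recycling mentioned above boils down to
something like this. This is only a minimal sketch based on the l2fwd
logic in xdpsock; struct umem_info and recycle_cq_to_fq() are made-up
names, while the xsk_ring_* accessors are the real libbpf/libxdp ones
(<bpf/xsk.h> in older trees, <xdp/xsk.h> with libxdp):

#include <bpf/xsk.h>

struct umem_info {
	struct xsk_ring_prod fq;	/* fill queue: buffers handed to Rx HW */
	struct xsk_ring_cons cq;	/* completion queue: finished Tx buffers */
};

static void recycle_cq_to_fq(struct umem_info *umem, __u32 batch)
{
	__u32 i, done, idx_cq = 0, idx_fq = 0;

	/* Harvest Tx buffers that the driver marked as completed. */
	done = xsk_ring_cons__peek(&umem->cq, batch, &idx_cq);
	if (!done)
		return;

	/* Make room in the fq; a real app would kick Rx with poll()
	 * instead of spinning when the reserve fails.
	 */
	while (xsk_ring_prod__reserve(&umem->fq, done, &idx_fq) != done)
		;

	/* Hand each completed umem address straight back to the Rx side. */
	for (i = 0; i < done; i++)
		*xsk_ring_prod__fill_addr(&umem->fq, idx_fq++) =
			*xsk_ring_cons__comp_addr(&umem->cq, idx_cq++);

	xsk_ring_prod__submit(&umem->fq, done);
	xsk_ring_cons__release(&umem->cq, done);
}

xdpxceiver itself refills the fq from the Rx path rather than from the
cq, which is part of why it can key off pkts_in_flight instead of
waiting for every entry to show up on the cq.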
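
And the detach-time flush on the driver side amounts to roughly the
following. Again just a sketch, modeled loosely on ice's ZC Tx ring
cleanup; my_tx_ring, my_tx_buf and my_free_tx_buf() are placeholders,
while xsk_tx_completed() (include/net/xdp_sock_drv.h) is the real helper
drivers use to post entries to the cq:

/* Called when the socket is being detached: walk the Tx ring between
 * next_to_clean and next_to_use, free non-ZC frames, and report all
 * outstanding ZC descriptors to the completion queue in one batch.
 */
static void my_clean_xdp_ring(struct my_tx_ring *ring)
{
	u32 ntc = ring->next_to_clean;
	u32 xsk_frames = 0;

	while (ntc != ring->next_to_use) {
		struct my_tx_buf *buf = &ring->tx_buf[ntc];

		if (buf->from_xsk_tx)		/* AF_XDP ZC descriptor */
			xsk_frames++;
		else
			my_free_tx_buf(ring, buf);	/* e.g. XDP_TX frame */

		if (++ntc == ring->count)
			ntc = 0;
	}

	if (xsk_frames)
		xsk_tx_completed(ring->xsk_pool, xsk_frames);
}

This is why the selftest only has to make sure both worker threads have
exited before the socket is torn down; whatever descriptors are still
outstanding are guaranteed to hit the cq during detach.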