On Mon, 23 Oct 2023 at 14:57, Albert Huang
<huangjie.albert@xxxxxxxxxxxxx> wrote:
>
> In the previous implementation, when multiple xsk sockets were
> associated with a single xsk_buff_pool, a situation could arise
> where the xsk_tx_list kept serving the xsk socket at the front of
> the list while starving the xsk sockets at the back of the list.
> This could result in issues such as an inability to transmit
> packets, increased latency, and jitter. To address this problem,
> we introduce a new variable called tx_budget_spent, which limits
> each xsk to a maximum of MAX_PER_SOCKET_BUDGET tx descriptors
> before yielding. This ensures equitable opportunities for the
> subsequent xsk sockets to send their tx descriptors. The value of
> MAX_PER_SOCKET_BUDGET is set to 32.

Thank you Albert for implementing this feature!

Acked-by: Magnus Karlsson <magnus.karlsson@xxxxxxxxx>

> Signed-off-by: Albert Huang <huangjie.albert@xxxxxxxxxxxxx>
> ---
>  include/net/xdp_sock.h |  7 +++++++
>  net/xdp/xsk.c          | 18 ++++++++++++++++++
>  2 files changed, 25 insertions(+)
>
> diff --git a/include/net/xdp_sock.h b/include/net/xdp_sock.h
> index 69b472604b86..de6819e50d54 100644
> --- a/include/net/xdp_sock.h
> +++ b/include/net/xdp_sock.h
> @@ -63,6 +63,13 @@ struct xdp_sock {
>
>  	struct xsk_queue *tx ____cacheline_aligned_in_smp;
>  	struct list_head tx_list;
> +	/* record the number of tx descriptors sent by this xsk and
> +	 * when it exceeds MAX_PER_SOCKET_BUDGET, an opportunity needs
> +	 * to be given to other xsks for sending tx descriptors, thereby
> +	 * preventing other XSKs from being starved.
> +	 */
> +	u32 tx_budget_spent;
> +
>  	/* Protects generic receive. */
>  	spinlock_t rx_lock;
>
> diff --git a/net/xdp/xsk.c b/net/xdp/xsk.c
> index f5e96e0d6e01..65c32b85c326 100644
> --- a/net/xdp/xsk.c
> +++ b/net/xdp/xsk.c
> @@ -33,6 +33,7 @@
>  #include "xsk.h"
>
>  #define TX_BATCH_SIZE 32
> +#define MAX_PER_SOCKET_BUDGET (TX_BATCH_SIZE)
>
>  static DEFINE_PER_CPU(struct list_head, xskmap_flush_list);
>
> @@ -413,16 +414,25 @@ EXPORT_SYMBOL(xsk_tx_release);
>
>  bool xsk_tx_peek_desc(struct xsk_buff_pool *pool, struct xdp_desc *desc)
>  {
> +	bool budget_exhausted = false;
>  	struct xdp_sock *xs;
>
>  	rcu_read_lock();
> +again:
>  	list_for_each_entry_rcu(xs, &pool->xsk_tx_list, tx_list) {
> +		if (xs->tx_budget_spent >= MAX_PER_SOCKET_BUDGET) {
> +			budget_exhausted = true;
> +			continue;
> +		}
> +
>  		if (!xskq_cons_peek_desc(xs->tx, desc, pool)) {
>  			if (xskq_has_descs(xs->tx))
>  				xskq_cons_release(xs->tx);
>  			continue;
>  		}
>
> +		xs->tx_budget_spent++;
> +
>  		/* This is the backpressure mechanism for the Tx path.
>  		 * Reserve space in the completion queue and only proceed
>  		 * if there is space in it. This avoids having to implement
> @@ -436,6 +446,14 @@ bool xsk_tx_peek_desc(struct xsk_buff_pool *pool, struct xdp_desc *desc)
>  		return true;
>  	}
>
> +	if (budget_exhausted) {
> +		list_for_each_entry_rcu(xs, &pool->xsk_tx_list, tx_list)
> +			xs->tx_budget_spent = 0;
> +
> +		budget_exhausted = false;
> +		goto again;
> +	}
> +
>  out:
>  	rcu_read_unlock();
>  	return false;
> --
> 2.20.1
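
For anyone who wants to experiment with the scheduling behaviour
outside the kernel, below is a minimal, self-contained user-space
sketch of the same round-robin budget pattern. The struct fake_xsk
type, its fields, and pick_next() are stand-ins invented for this
sketch; they are not the real xdp_sock structures or kernel APIs.

/* fairness_sketch.c - standalone illustration of the per-socket
 * budget pattern from the patch above. All names are made up for
 * the example; they are not kernel types or APIs.
 */
#include <stdbool.h>
#include <stdio.h>

#define MAX_PER_SOCKET_BUDGET 32

struct fake_xsk {
	int id;
	int pending;		/* descriptors waiting in this socket's tx ring */
	unsigned int budget_spent;
};

/* Scan the socket list for the next descriptor to send. A socket
 * that has spent its budget is skipped so the sockets after it get
 * a turn; once every socket still holding work is over budget, all
 * budgets are reset and the scan restarts, mirroring the kernel's
 * goto again.
 */
static struct fake_xsk *pick_next(struct fake_xsk *socks, int n)
{
	bool budget_exhausted = false;
	int i;

again:
	for (i = 0; i < n; i++) {
		struct fake_xsk *xs = &socks[i];

		if (xs->budget_spent >= MAX_PER_SOCKET_BUDGET) {
			budget_exhausted = true;
			continue;
		}
		if (!xs->pending)
			continue;

		xs->budget_spent++;
		xs->pending--;
		return xs;
	}

	if (budget_exhausted) {
		for (i = 0; i < n; i++)
			socks[i].budget_spent = 0;
		budget_exhausted = false;
		goto again;
	}
	return NULL;	/* nothing left to send */
}

int main(void)
{
	struct fake_xsk socks[2] = {
		{ .id = 0, .pending = 1000 },
		{ .id = 1, .pending = 1000 },
	};
	struct fake_xsk *xs;
	int sent[2] = { 0, 0 };

	/* Without the budget check, this toy scheduler would drain
	 * socket 0 completely before socket 1 ever sent a descriptor,
	 * since every scan starts from the front of the list.
	 */
	while ((xs = pick_next(socks, 2)))
		sent[xs->id]++;

	printf("socket 0 sent %d, socket 1 sent %d\n", sent[0], sent[1]);
	return 0;
}

Running the sketch shows both sockets draining their work in
interleaved 32-descriptor bursts instead of the front socket
monopolizing the list.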