On 25/07/2022 12.56, Lorenzo Bianconi wrote:
> Report rx queue index in xdp_frame according to the xdp_buff xdp_rxq_info
> pointer. xdp_frame queue_index is currently used in cpumap code to convert
> the xdp_frame into a xdp_buff.
Hmm, I'm unsure about this change, because the XDP-hints will also
contain the rx_queue number.
I do think it is relevant for the BPF-prog to get access to the rx_queue
index, because it can be used for scaling the workload.
> xdp_frame size is not increased by adding queue_index, since the new field
> fits into an existing alignment hole in the structure.
The rx_queue could be reduced from u32 to u16, but it might be faster to
keep it u32, and reduce it when others need the space.
> Signed-off-by: Lorenzo Bianconi <lorenzo@xxxxxxxxxx>
> ---
>  include/net/xdp.h   | 2 ++
>  kernel/bpf/cpumap.c | 2 +-
>  2 files changed, 3 insertions(+), 1 deletion(-)
>
> diff --git a/include/net/xdp.h b/include/net/xdp.h
> index 04c852c7a77f..3567866b0af5 100644
> --- a/include/net/xdp.h
> +++ b/include/net/xdp.h
> @@ -172,6 +172,7 @@ struct xdp_frame {
>  	struct xdp_mem_info mem;
>  	struct net_device *dev_rx; /* used by cpumap */
>  	u32 flags; /* supported values defined in xdp_buff_flags */
> +	u32 queue_index;
>  };
>
>  static __always_inline bool xdp_frame_has_frags(struct xdp_frame *frame)
> @@ -301,6 +302,7 @@ struct xdp_frame *xdp_convert_buff_to_frame(struct xdp_buff *xdp)
>  	/* rxq only valid until napi_schedule ends, convert to xdp_mem_info */
>  	xdp_frame->mem = xdp->rxq->mem;
> +	xdp_frame->queue_index = xdp->rxq->queue_index;
>
>  	return xdp_frame;
>  }
> diff --git a/kernel/bpf/cpumap.c b/kernel/bpf/cpumap.c
> index f4860ac756cd..09a792d088b3 100644
> --- a/kernel/bpf/cpumap.c
> +++ b/kernel/bpf/cpumap.c
> @@ -228,7 +228,7 @@ static int cpu_map_bpf_prog_run_xdp(struct bpf_cpu_map_entry *rcpu,
>  		rxq.dev = xdpf->dev_rx;
>  		rxq.mem = xdpf->mem;
> -		/* TODO: report queue_index to xdp_rxq_info */
> +		rxq.queue_index = xdpf->queue_index;
>
>  		xdp_convert_frame_to_buff(xdpf, &xdp);