On 6/23/21 1:07 PM, Toke Høiland-Jørgensen wrote:
XDP_REDIRECT works as a three-step process: the bpf_redirect() and
bpf_redirect_map() helpers will look up the target of the redirect and store
it (along with some other metadata) in a per-CPU struct bpf_redirect_info.
Next, when the program returns the XDP_REDIRECT return code, the driver
will call xdp_do_redirect(), which will use the stored information to
enqueue the frame into a bulk queue structure (which differs slightly by
map type, but follows the same principle). Finally, before exiting its NAPI
poll loop, the driver will call xdp_do_flush(), which will flush all the
different bulk queues, thus completing the redirect.
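
To make step one concrete, here is a minimal BPF-program-side sketch (not
taken from the series); the map layout and the egress index are illustrative
only, but bpf_redirect_map() is the real helper that fills in the per-CPU
bpf_redirect_info and returns XDP_REDIRECT:

/* Illustrative only: map name, size and index are made up. On a failed
 * lookup, bpf_redirect_map() returns the action passed in the low bits of
 * the flags argument (XDP_PASS here) instead of XDP_REDIRECT.
 */
#include <linux/bpf.h>
#include <bpf/bpf_helpers.h>

struct {
	__uint(type, BPF_MAP_TYPE_DEVMAP);
	__uint(max_entries, 64);
	__type(key, __u32);
	__type(value, __u32);
} tx_ports SEC(".maps");

SEC("xdp")
int xdp_redirect_example(struct xdp_md *ctx)
{
	__u32 egress_idx = 0;	/* hypothetical index into tx_ports */

	/* Step one: record the target in this CPU's bpf_redirect_info. */
	return bpf_redirect_map(&tx_ports, egress_idx, XDP_PASS);
}

char _license[] SEC("license") = "GPL";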
[...]
Signed-off-by: Toke Høiland-Jørgensen <toke@xxxxxxxxxx>
[...]
diff --git a/include/linux/filter.h b/include/linux/filter.h
index c5ad7df029ed..b01e266dad9e 100644
--- a/include/linux/filter.h
+++ b/include/linux/filter.h
@@ -762,12 +762,10 @@ DECLARE_BPF_DISPATCHER(xdp)
 static __always_inline u32 bpf_prog_run_xdp(const struct bpf_prog *prog,
 					    struct xdp_buff *xdp)
-{
-	/* Caller needs to hold rcu_read_lock() (!), otherwise program
-	 * can be released while still running, or map elements could be
-	 * freed early while still having concurrent users. XDP fastpath
-	 * already takes rcu_read_lock() when fetching the program, so
-	 * it's not necessary here anymore.
+
+	/* Driver XDP hooks are invoked within a single NAPI poll cycle and thus
+	 * under local_bh_disable(), which provides the needed RCU protection
+	 * for accessing map entries.
 	 */
 	return __BPF_PROG_RUN(prog, xdp, BPF_DISPATCHER_FUNC(xdp));
 }
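
For reference, the driver-side pattern the new comment describes looks
roughly like the sketch below. The driver structure and the mydrv_*
helpers are hypothetical, but bpf_prog_run_xdp(), xdp_do_redirect() and
xdp_do_flush() are the real entry points for steps two and three, and the
whole sequence runs inside one NAPI poll, i.e. with BH disabled:

/* Hypothetical driver NAPI poll; only the XDP/BPF calls are real kernel
 * APIs, and error handling (e.g. dropping the frame when xdp_do_redirect()
 * fails) is elided for brevity.
 */
static int mydrv_napi_poll(struct napi_struct *napi, int budget)
{
	struct mydrv_rx_ring *ring = container_of(napi, struct mydrv_rx_ring, napi);
	struct bpf_prog *prog = READ_ONCE(ring->xdp_prog);
	int done = 0;

	while (prog && done < budget && mydrv_rx_pending(ring)) {
		struct xdp_buff xdp;

		mydrv_fill_xdp_buff(ring, &xdp);	/* hypothetical helper */

		switch (bpf_prog_run_xdp(prog, &xdp)) {
		case XDP_REDIRECT:
			/* Step two: enqueue into the bulk queue selected by
			 * the info bpf_redirect{,_map}() stored earlier.
			 */
			xdp_do_redirect(ring->netdev, &xdp, prog);
			break;
		case XDP_PASS:
			mydrv_pass_to_stack(ring, &xdp);	/* hypothetical */
			break;
		default:
			mydrv_recycle_buffer(ring, &xdp);	/* hypothetical */
			break;
		}
		done++;
	}

	/* Step three: flush all bulk queues before leaving the poll loop,
	 * still under local_bh_disable() and thus still RCU protected.
	 */
	xdp_do_flush();

	if (done < budget)
		napi_complete_done(napi, done);
	return done;
}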
I just went over the series to manually fix up merge conflicts in the driver
patches since they didn't apply cleanly against bpf-next.
But as it turned out, that extra work was needless, since you didn't even
compile-test the series before submission, sigh.
Please fix (and only submit compile- & runtime-tested code in future).
Thanks,
Daniel