On 08/03, Martin KaFai Lau wrote:
Most of the code in bpf_setsockopt(SOL_SOCKET) is duplicated from sk_setsockopt(). The number of supported optnames keeps increasing, and so does the duplicated code.
One issue in reusing sk_setsockopt() is that the bpf prog has already acquired the sk lock. This patch adds an in_bpf() helper to tell whether sk_setsockopt() is being called from a bpf prog. A bpf prog calling bpf_setsockopt() runs either in_task() or in_serving_softirq(), and in both cases current->bpf_ctx is initialized. Thus, in_bpf() only needs to test !!current->bpf_ctx.
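For illustration only (this is the assumed runner-side pattern, not a hunk from this patch): a prog runner brackets the execution with the run-ctx helpers, so current->bpf_ctx is non-NULL for the duration of the run:

	struct bpf_cg_run_ctx run_ctx;
	struct bpf_run_ctx *old_run_ctx;

	old_run_ctx = bpf_set_run_ctx(&run_ctx.run_ctx);
	/* the prog runs here; a bpf_setsockopt() call made from the
	 * prog therefore sees in_bpf() == true
	 */
	bpf_reset_run_ctx(old_run_ctx);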
This patch also adds sockopt_{lock,release}_sock() helpers for sk_setsockopt() to use. These helpers test in_bpf() before acquiring/releasing the lock. They are exported with EXPORT_SYMBOL so the ipv6 module can use them in a later patch.
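The net/core/sock.c hunk is not included in the quoted diff below; a minimal sketch of what these helpers could look like, based only on the description above (the exact bodies in the patch may differ):

	void sockopt_lock_sock(struct sock *sk)
	{
		/* A bpf prog calling bpf_setsockopt() has already
		 * acquired the sk lock, so skip it here.
		 */
		if (in_bpf())
			return;

		lock_sock(sk);
	}
	EXPORT_SYMBOL(sockopt_lock_sock);

	void sockopt_release_sock(struct sock *sk)
	{
		if (in_bpf())
			return;

		release_sock(sk);
	}
	EXPORT_SYMBOL(sockopt_release_sock);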
A note on the change in sock_setbindtodevice(): sockopt_lock_sock() is now called in sock_setbindtodevice() itself instead of having sock_bindtoindex(..., lock_sk = true) take the lock.
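A rough sketch of the assumed shape of that change (not the exact hunk; it presumes the existing sock_bindtoindex_locked() helper is called directly):

	/* ... devname has been resolved to index as before ... */
	sockopt_lock_sock(sk);
	ret = sock_bindtoindex_locked(sk, index);
	sockopt_release_sock(sk);

	return ret;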
Signed-off-by: Martin KaFai Lau <kafai@xxxxxx>
---
 include/linux/bpf.h |  8 ++++++++
 include/net/sock.h  |  3 +++
 net/core/sock.c     | 26 +++++++++++++++++++++++---
 3 files changed, 34 insertions(+), 3 deletions(-)
diff --git a/include/linux/bpf.h b/include/linux/bpf.h
index 20c26aed7896..b905b1b34fe4 100644
--- a/include/linux/bpf.h
+++ b/include/linux/bpf.h
@@ -1966,6 +1966,10 @@ static inline bool unprivileged_ebpf_enabled(void)
 	return !sysctl_unprivileged_bpf_disabled;
 }
 
+static inline bool in_bpf(void)
+{
+	return !!current->bpf_ctx;
+}
Good point on not needing to care about softirq! That actually turned out even nicer :-) QQ: do we need to add a comment here about potential false negatives? I see you're adding the ctx to the iter, but there are still a bunch of places that don't use it.
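For example, something along these lines (purely hypothetical wording, in case such a comment is wanted):

	/* Returns true when called from a bpf prog whose runner set
	 * current->bpf_ctx via bpf_set_run_ctx().  Runners that do not
	 * set a run ctx make this return false even while a prog is
	 * running (false negative).
	 */
	static inline bool in_bpf(void)
	{
		return !!current->bpf_ctx;
	}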