Re: [RFC PATCH] bpf: Prevent recursive deadlocks in BPF programs attached to spin lock helpers using fentry/fexit

On Tue, 30 Jan 2024 at 04:25, Jiri Olsa <olsajiri@xxxxxxxxx> wrote:
>
> On Wed, Jan 24, 2024 at 10:43:32AM -0500, Siddharth Chintamaneni wrote:
> > While we were working on some experiments with BPF trampoline, we came
> > across a deadlock scenario that could happen.
> >
> > A deadlock happens when two nested BPF programs try to acquire the
> > same lock, i.e., if a BPF program is attached using fexit to
> > bpf_spin_lock or fentry to bpf_spin_unlock, and it then attempts to
> > acquire the same lock as the outer BPF program, a deadlock arises.
> >
> > Here is an example:
> >
> > SEC("fentry/bpf_spin_unlock")
> > int fentry_2(void *ctx)
> > {
> >   bpf_spin_lock(&x->lock);
> >   bpf_spin_unlock(&x->lock);
> >   return 0;
> > }
> >
> > SEC("fentry/xxx")
> > int fentry_1(void *ctx)
> > {
> >   bpf_spin_lock(&x->lock);
> >   bpf_spin_unlock(&x->lock);
> >   return 0;
> > }
>
> hi,
> looks like valid issue, could you add selftest for that?

Hello,
I have added a selftest for the deadlock scenario.

>
> I wonder if we could restrict just the programs that use the
> bpf_spin_lock/bpf_spin_unlock helpers? I'm not sure there's any useful
> use case for tracing spin lock helpers, but I think we should at least
> try this before we deny it completely
>

If we only restrict programs (attached to the spin lock helpers) that
themselves use the bpf_spin_lock/unlock helpers, we miss an indirect
case: such a program can call some other helper, and a second BPF
program attached to that helper can then try to acquire the same lock,
leading to the same deadlock.
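
Here is a minimal sketch of that indirect case (the attach points, map,
and program names are all hypothetical; any helper that can have an
fentry/fexit program attached would do):

#include <linux/bpf.h>
#include <bpf/bpf_helpers.h>

struct hmap_elem {
        int cnt;
        struct bpf_spin_lock lock;
};

struct {
        __uint(type, BPF_MAP_TYPE_HASH);
        __uint(max_entries, 1);
        __type(key, int);
        __type(value, struct hmap_elem);
} hmap SEC(".maps");

/* Attached to the spin lock helper, but it never calls
 * bpf_spin_lock()/bpf_spin_unlock() itself, so a restriction based only
 * on helper usage inside the program would not reject it.
 */
SEC("fexit/bpf_spin_lock")
int prog_a(void *ctx)
{
        int key = 1;

        /* Calls another helper, which may itself have a program attached. */
        bpf_map_delete_elem(&hmap, &key);
        return 0;
}

/* Attached to the helper called above; it takes the same lock that the
 * context interrupted by prog_a may still hold, so it spins forever.
 */
SEC("fentry/bpf_map_delete_elem")
int prog_b(void *ctx)
{
        struct hmap_elem *val;
        int key = 1;

        val = bpf_map_lookup_elem(&hmap, &key);
        if (val) {
                bpf_spin_lock(&val->lock);
                val->cnt++;
                bpf_spin_unlock(&val->lock);
        }
        return 0;
}

char _license[] SEC("license") = "GPL";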

> >
> > To prevent these cases, a simple fix could be adding these helpers to
> > the denylist in the verifier. This fix prevents such BPF programs from
> > being loaded by the verifier.
> >
> > Previously, a similar solution was proposed to prevent recursion:
> > https://lore.kernel.org/lkml/20230417154737.12740-2-laoar.shao@xxxxxxxxx/
>
> the difference is that __rcu_read_lock/__rcu_read_unlock are called
> unconditionally (always) when executing a bpf tracing probe; the problem
> you described above only affects programs calling the spin lock helpers
> (on the same spin lock)
>
> >
> > Signed-off-by: Siddharth Chintamaneni <sidchintamaneni@xxxxxx>
> > ---
> > diff --git a/kernel/bpf/verifier.c b/kernel/bpf/verifier.c
> > index 65f598694d55..8f1834f27f81 100644
> > --- a/kernel/bpf/verifier.c
> > +++ b/kernel/bpf/verifier.c
> > @@ -20617,6 +20617,10 @@ BTF_ID(func, preempt_count_sub)
> >  BTF_ID(func, __rcu_read_lock)
> >  BTF_ID(func, __rcu_read_unlock)
> >  #endif
> > +#if defined(CONFIG_DYNAMIC_FTRACE)
>
> why the CONFIG_DYNAMIC_FTRACE dependency?

As we described in the selftests, nesting of multiple BPF programs can
only happen with fentry/fexit programs when DYNAMIC_FTRACE is enabled.
When DYNAMIC_FTRACE is disabled, a BPF program cannot be attached to any
helper function, so the deny entries are not needed in that case.
>
> jirka
>
> > +BTF_ID(func, bpf_spin_lock)
> > +BTF_ID(func, bpf_spin_unlock)
> > +#endif
> >  BTF_SET_END(btf_id_deny)

Here is the updated patch with the selftest added:

Signed-off-by: Siddharth Chintamaneni <sidchintamaneni@xxxxxx>
---
diff --git a/kernel/bpf/verifier.c b/kernel/bpf/verifier.c
index 65f598694d55..ffc2515195f1 100644
--- a/kernel/bpf/verifier.c
+++ b/kernel/bpf/verifier.c
@@ -20617,6 +20617,10 @@ BTF_ID(func, preempt_count_sub)
 BTF_ID(func, __rcu_read_lock)
 BTF_ID(func, __rcu_read_unlock)
 #endif
+#ifdef CONFIG_DYNAMIC_FTRACE
+BTF_ID(func, bpf_spin_lock)
+BTF_ID(func, bpf_spin_unlock)
+#endif
 BTF_SET_END(btf_id_deny)

 static bool can_be_sleepable(struct bpf_prog *prog)
diff --git a/tools/testing/selftests/bpf/prog_tests/test_dead_lock.c b/tools/testing/selftests/bpf/prog_tests/test_dead_lock.c
new file mode 100644
index 000000000000..8e2db654e963
--- /dev/null
+++ b/tools/testing/selftests/bpf/prog_tests/test_dead_lock.c
@@ -0,0 +1,26 @@
+#include <test_progs.h>
+#include "test_dead_lock.skel.h"
+
+void test_dead_lock_fail(void){
+ struct test_dead_lock *skel;
+ int prog_fd;
+ int err;
+
+ LIBBPF_OPTS(bpf_test_run_opts, topts);
+ skel = test_dead_lock__open_and_load();
+ if (!ASSERT_OK_PTR(skel, "test_dead_lock__open_and_load"))
+ goto end;
+
+ err = test_dead_lock__attach(skel);
+ if (!ASSERT_OK(err, "test_dead_lock_attach"))
+ goto end;
+
+ prog_fd = bpf_program__fd(skel->progs.dead_lock_test_main);
+ err = bpf_prog_test_run_opts(prog_fd, &topts);
+ ASSERT_OK(err, "test_run");
+ ASSERT_EQ(topts.retval, 0, "test_run");
+
+end:
+ test_dead_lock__destroy(skel);
+}
+
diff --git a/tools/testing/selftests/bpf/progs/test_dead_lock.c b/tools/testing/selftests/bpf/progs/test_dead_lock.c
new file mode 100644
index 000000000000..72c6a0b033c9
--- /dev/null
+++ b/tools/testing/selftests/bpf/progs/test_dead_lock.c
@@ -0,0 +1,80 @@
+#include <linux/bpf.h>
+#include <linux/version.h>
+#include <bpf/bpf_helpers.h>
+
+struct hmap_elem {
+ int cnt;
+ struct bpf_spin_lock lock;
+};
+
+struct {
+ __uint(type, BPF_MAP_TYPE_HASH);
+ __uint(max_entries, 1);
+ __type(key, int);
+ __type(value, struct hmap_elem);
+} hmap SEC(".maps");
+
+SEC("fexit/bpf_spin_lock")
+int dead_lock_test_inner1(void *ctx){
+
+ struct hmap_elem *val;
+ int key = 1;
+ int err = 0;
+
+ val = bpf_map_lookup_elem(&hmap, &key);
+ if (!val) {
+ goto err;
+ }
+
+ bpf_spin_lock(&val->lock);
+ val->cnt++;
+ bpf_spin_unlock(&val->lock);
+
+err:
+ return err;
+}
+
+SEC("fentry/bpf_spin_unlock")
+int dead_lock_test_inner2(void *ctx){
+
+ struct hmap_elem *val;
+ int key = 1;
+ int err = 0;
+
+ val = bpf_map_lookup_elem(&hmap, &key);
+ if (!val) {
+ goto err;
+ }
+
+ bpf_spin_lock(&val->lock);
+ val->cnt++;
+ bpf_spin_unlock(&val->lock);
+
+err:
+ return err;
+}
+
+SEC("fentry/bpf_fentry_test1")
+int dead_lock_test_main(void *ctx){
+
+ struct hmap_elem nval = {}, *val;
+ int key = 1;
+ int err = 0;
+
+ val = bpf_map_lookup_elem(&hmap, &key);
+ if (!val) {
+ bpf_map_update_elem(&hmap, &key, &nval, 0);
+ val = bpf_map_lookup_elem(&hmap, &key);
+ if (!val) {
+ goto err;
+ }
+ }
+
+ bpf_spin_lock(&val->lock);
+ val->cnt++;
+ bpf_spin_unlock(&val->lock);
+err:
+ return err;
+}
+
+char _license[] SEC("license") = "GPL";
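
For context on how the denylist takes effect: the btf_id_deny set
extended above is consulted while the verifier checks the attach target
of a tracing program, so loading an fentry/fexit program that targets
bpf_spin_lock or bpf_spin_unlock is rejected with -EINVAL. Roughly (the
check below is quoted from memory of check_attach_btf_id() and may not
match current trees exactly):

        } else if (prog->type == BPF_PROG_TYPE_TRACING &&
                   btf_id_set_contains(&btf_id_deny, btf_id)) {
                return -EINVAL;
        }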



