Patch "selftests/bpf: fix bpf_loop_bench for new callback verification scheme" has been added to the 6.1-stable tree

This is a note to let you know that I've just added the patch titled

    selftests/bpf: fix bpf_loop_bench for new callback verification scheme

to the 6.1-stable tree which can be found at:
    http://www.kernel.org/git/?p=linux/kernel/git/stable/stable-queue.git;a=summary

The filename of the patch is:
     selftests-bpf-fix-bpf_loop_bench-for-new-callback-ve.patch
and it can be found in the queue-6.1 subdirectory.

If you, or anyone else, feels it should not be added to the stable tree,
please let <stable@xxxxxxxxxxxxxxx> know about it.



commit ec5d285061a83c0db7d84093e108cbe670ee458e
Author: Eduard Zingerman <eddyz87@xxxxxxxxx>
Date:   Tue Nov 21 04:06:53 2023 +0200

    selftests/bpf: fix bpf_loop_bench for new callback verification scheme
    
    [ Upstream commit f40bfd1679446b22d321e64a1fa98b7d07d2be08 ]
    
    This is a preparatory change. A follow-up patch "bpf: verify callbacks
    as if they are called unknown number of times" changes the logic for
    callback handling. Previously, callbacks were verified as a single
    function call; the new scheme takes into account that callbacks could
    be executed an unknown number of times.
    
    This has dire implications for bpf_loop_bench:
    
        SEC("fentry/" SYS_PREFIX "sys_getpgid")
        int benchmark(void *ctx)
        {
                for (int i = 0; i < 1000; i++) {
                        bpf_loop(nr_loops, empty_callback, NULL, 0);
                        __sync_add_and_fetch(&hits, nr_loops);
                }
                return 0;
        }
    
    Without the callbacks change, the verifier sees this as 1000 calls to
    empty_callback(). With the callbacks change, however, things become
    exponential:
    - i=0: state exploring empty_callback is scheduled with i=0 (a);
    - i=1: state exploring empty_callback is scheduled with i=1;
      ...
    - i=999: state exploring empty_callback is scheduled with i=999;
    - state (a) is popped from stack;
    - i=1: state exploring empty_callback is scheduled with i=1;
      ...
    
    Avoid this issue by rewriting the outer loop as a bpf_loop() call.
    Unfortunately, this adds a function call per outer-loop iteration at
    runtime, which negatively affects performance:
    
                throughput               latency
       before:  149.919 ± 0.168 M ops/s, 6.670 ns/op
       after :  137.040 ± 0.187 M ops/s, 7.297 ns/op
    
    Acked-by: Andrii Nakryiko <andrii@xxxxxxxxxx>
    Signed-off-by: Eduard Zingerman <eddyz87@xxxxxxxxx>
    Link: https://lore.kernel.org/r/20231121020701.26440-4-eddyz87@xxxxxxxxx
    Signed-off-by: Alexei Starovoitov <ast@xxxxxxxxxx>
    Signed-off-by: Sasha Levin <sashal@xxxxxxxxxx>

diff --git a/tools/testing/selftests/bpf/progs/bpf_loop_bench.c b/tools/testing/selftests/bpf/progs/bpf_loop_bench.c
index 4ce76eb064c41..d461746fd3c1e 100644
--- a/tools/testing/selftests/bpf/progs/bpf_loop_bench.c
+++ b/tools/testing/selftests/bpf/progs/bpf_loop_bench.c
@@ -15,13 +15,16 @@ static int empty_callback(__u32 index, void *data)
 	return 0;
 }
 
+static int outer_loop(__u32 index, void *data)
+{
+	bpf_loop(nr_loops, empty_callback, NULL, 0);
+	__sync_add_and_fetch(&hits, nr_loops);
+	return 0;
+}
+
 SEC("fentry/" SYS_PREFIX "sys_getpgid")
 int benchmark(void *ctx)
 {
-	for (int i = 0; i < 1000; i++) {
-		bpf_loop(nr_loops, empty_callback, NULL, 0);
-
-		__sync_add_and_fetch(&hits, nr_loops);
-	}
+	bpf_loop(1000, outer_loop, NULL, 0);
 	return 0;
 }
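
For readers who want the whole program in one piece, here is a sketch of how
the rewritten bpf_loop_bench.c plausibly looks after this patch. Only the hunk
above is taken from the diff; the preamble (headers, license section, and the
nr_loops/hits globals it references) is assumed from the usual selftests
layout.

    /* Sketch of the post-patch bpf_loop_bench.c; the preamble below is
     * assumed, not taken from the quoted diff.
     */
    #include "vmlinux.h"
    #include <bpf/bpf_helpers.h>
    #include "bpf_misc.h"           /* assumed: provides SYS_PREFIX */

    char _license[] SEC("license") = "GPL";

    __u32 nr_loops;                 /* set by the user-space harness */
    long hits;                      /* read back by the user-space harness */

    static int empty_callback(__u32 index, void *data)
    {
            return 0;
    }

    static int outer_loop(__u32 index, void *data)
    {
            /* Inner bpf_loop() runs empty_callback nr_loops times */
            bpf_loop(nr_loops, empty_callback, NULL, 0);
            __sync_add_and_fetch(&hits, nr_loops);
            return 0;
    }

    SEC("fentry/" SYS_PREFIX "sys_getpgid")
    int benchmark(void *ctx)
    {
            /* The outer loop is itself a bpf_loop(), so the verifier sees
             * a single callback instead of 1000 inlined iterations.
             */
            bpf_loop(1000, outer_loop, NULL, 0);
            return 0;
    }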


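The throughput/latency figures quoted in the commit message come from the
selftests bench harness. As a rough, hypothetical illustration of how such a
program can be driven, the sketch below follows libbpf skeleton conventions;
the skeleton header name, the nr_loops value, and the getpgid() trigger loop
are assumptions for illustration, not the actual benchmark code.

    /* Hypothetical user-space driver, NOT the actual selftests bench
     * harness. Assumes a skeleton generated with "bpftool gen skeleton"
     * from the BPF object, exposing bpf_loop_bench__open_and_load(),
     * __attach(), __destroy(), and the .bss globals nr_loops and hits.
     */
    #include <stdio.h>
    #include <unistd.h>
    #include "bpf_loop_bench.skel.h"

    int main(void)
    {
            struct bpf_loop_bench *skel;
            long before, after;

            skel = bpf_loop_bench__open_and_load();
            if (!skel) {
                    fprintf(stderr, "failed to open/load BPF skeleton\n");
                    return 1;
            }

            skel->bss->nr_loops = 32;  /* iterations per inner bpf_loop() */

            if (bpf_loop_bench__attach(skel)) {
                    fprintf(stderr, "failed to attach fentry program\n");
                    bpf_loop_bench__destroy(skel);
                    return 1;
            }

            before = skel->bss->hits;
            /* Each getpgid() syscall triggers the fentry program once */
            for (int i = 0; i < 1000; i++)
                    getpgid(0);
            after = skel->bss->hits;

            printf("hits accumulated: %ld\n", after - before);

            bpf_loop_bench__destroy(skel);
            return 0;
    }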

