Re: [PATCH bpf-next v1 7/8] selftests/bpf: Add selftests for load-acquire and store-release instructions

On Sat, 2025-01-25 at 02:19 +0000, Peilin Ye wrote:
> Add several ./test_progs tests:
> 
>   - atomics/load_acquire
>   - atomics/store_release
>   - arena_atomics/load_acquire
>   - arena_atomics/store_release
>   - verifier_load_acquire/*
>   - verifier_store_release/*
>   - verifier_precision/bpf_load_acquire
>   - verifier_precision/bpf_store_release
> 
> The last two tests are added to check if backtrack_insn() handles the
> new instructions correctly.
> 
> Additionally, the last test also makes sure that the verifier
> "remembers" the value (in src_reg) we store-release into e.g. a stack
> slot.  For example, if we take a look at the test program:
> 
>     #0:  "r1 = 8;"
>     #1:  "store_release((u64 *)(r10 - 8), r1);"
>     #2:  "r1 = *(u64 *)(r10 - 8);"
>     #3:  "r2 = r10;"
>     #4:  "r2 += r1;"	/* mark_precise */
>     #5:  "r0 = 0;"
>     #6:  "exit;"
> 
> At #1, if the verifier doesn't remember that we wrote 8 to the stack,
> then later at #4 we would be adding an unbounded scalar value to the
> stack pointer, which would cause the program to be rejected:
> 
>   VERIFIER LOG:
>   =============
> ...
>   math between fp pointer and register with unbounded min value is not allowed
> 
> All new tests depend on the pre-defined __BPF_FEATURE_LOAD_ACQ_STORE_REL
> feature macro, which implies -mcpu>=v4.

This restriction would mean that the tests are skipped on BPF CI, which
currently runs with LLVM 17 and 18. Instead, I suggest hiding the new
instructions behind a macro that emits them as raw inline assembly, as below:

	asm volatile (".8byte %[insn];"
	              :
	              : [insn]"i"(*(long *)&(BPF_RAW_INSN(...)))
	              : /* correct clobbers here */);

See the usage of the __imm_insn() macro in the test suite.
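
E.g. for an 8-bit load-acquire, a filled-in version could look roughly like
this (untested; BPF_LOAD_ACQ stands in for whatever opcode value this series
defines, BPF_ATOMIC_OP() comes from tools/include/linux/filter.h, and
__imm_insn()/__clobber_all are from progs/bpf_misc.h):

	asm volatile (
	"w1 = 0x12;"
	"*(u8 *)(r10 - 1) = w1;"
	".8byte %[load_acquire_insn];" /* w0 = load_acquire((u8 *)(r10 - 1)); */
	"exit;"
	:
	: __imm_insn(load_acquire_insn,
		     BPF_ATOMIC_OP(BPF_B, BPF_LOAD_ACQ,
				   BPF_REG_0, BPF_REG_10, -1))
	: __clobber_all);

This way the tests would compile (and run) regardless of whether the compiler
supports __BPF_FEATURE_LOAD_ACQ_STORE_REL.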

Also, the "BPF_ATOMIC loads from R%d %s is not allowed\n" and
          "BPF_ATOMIC stores into R%d %s is not allowed\n"
rejection paths are not exercised by the new tests.
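
E.g. a rejection test for the load side could look roughly like this
(untested; assuming a load-acquire from a ctx pointer is one of the cases
that triggers the message, and that the __msg() string matches the actual
verifier output):

	SEC("tc")
	__description("load-acquire from ctx pointer")
	__failure __msg("BPF_ATOMIC loads from R1 ctx is not allowed")
	__naked void load_acquire_from_ctx(void)
	{
		asm volatile (
		".8byte %[load_acquire_insn];" /* w0 = load_acquire((u8 *)(r1 + 0)); */
		"exit;"
		:
		: __imm_insn(load_acquire_insn,
			     BPF_ATOMIC_OP(BPF_B, BPF_LOAD_ACQ,
					   BPF_REG_0, BPF_REG_1, 0))
		: __clobber_all);
	}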

[...]

> diff --git a/tools/testing/selftests/bpf/prog_tests/atomics.c b/tools/testing/selftests/bpf/prog_tests/atomics.c
> index 13e101f370a1..5d7cff3eed2b 100644
> --- a/tools/testing/selftests/bpf/prog_tests/atomics.c
> +++ b/tools/testing/selftests/bpf/prog_tests/atomics.c
> @@ -162,6 +162,56 @@ static void test_xchg(struct atomics_lskel *skel)
>  	ASSERT_EQ(skel->bss->xchg32_result, 1, "xchg32_result");
>  }

Nit: Given the tests in verifier_load_acquire.c and verifier_store_release.c
     that use the __retval annotation, are these tests really necessary?
     (assuming the verifier_store_release.c tests are modified to read the
      stored value back into r0 before exit).
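
     E.g. (untested; BPF_STORE_REL again stands in for the opcode value
     defined by this series):

	SEC("socket")
	__description("store-release, then read back")
	__success __retval(0x12)
	__naked void store_release_read_back(void)
	{
		asm volatile (
		"r1 = 0x12;"
		".8byte %[store_release_insn];" /* store_release((u64 *)(r10 - 8), r1); */
		"r0 = *(u64 *)(r10 - 8);"
		"exit;"
		:
		: __imm_insn(store_release_insn,
			     BPF_ATOMIC_OP(BPF_DW, BPF_STORE_REL,
					   BPF_REG_10, BPF_REG_1, -8))
		: __clobber_all);
	}

     With tests like this in place, test_loader would both run the program and
     check r0, making the extra prog_tests/atomics.c runners redundant.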

> +static void test_load_acquire(struct atomics_lskel *skel)
> +{
> +	LIBBPF_OPTS(bpf_test_run_opts, topts);
> +	int err, prog_fd;
> +
> +	if (skel->data->skip_lacq_srel_tests) {
> +		printf("%s:SKIP:Clang does not support BPF load-acquire\n", __func__);
> +		test__skip();
> +		return;
> +	}
> +
> +	/* No need to attach it, just run it directly */
> +	prog_fd = skel->progs.load_acquire.prog_fd;
> +	err = bpf_prog_test_run_opts(prog_fd, &topts);
> +	if (!ASSERT_OK(err, "test_run_opts err"))
> +		return;
> +	if (!ASSERT_OK(topts.retval, "test_run_opts retval"))
> +		return;
> +
> +	ASSERT_EQ(skel->bss->load_acquire8_result, 0x12, "load_acquire8_result");
> +	ASSERT_EQ(skel->bss->load_acquire16_result, 0x1234, "load_acquire16_result");
> +	ASSERT_EQ(skel->bss->load_acquire32_result, 0x12345678, "load_acquire32_result");
> +	ASSERT_EQ(skel->bss->load_acquire64_result, 0x1234567890abcdef, "load_acquire64_result");
> +}

[...]

> --- a/tools/testing/selftests/bpf/progs/arena_atomics.c
> +++ b/tools/testing/selftests/bpf/progs/arena_atomics.c
[...]

> +SEC("raw_tp/sys_enter")
> +int load_acquire(const void *ctx)
> +{
> +	if (pid != (bpf_get_current_pid_tgid() >> 32))
> +		return 0;

Nit: This check is not needed, since bpf_prog_test_run_opts() is used
     to run the tests.

> +
> +#ifdef __BPF_FEATURE_LOAD_ACQ_STORE_REL
> +	load_acquire8_result = __atomic_load_n(&load_acquire8_value, __ATOMIC_ACQUIRE);
> +	load_acquire16_result = __atomic_load_n(&load_acquire16_value, __ATOMIC_ACQUIRE);
> +	load_acquire32_result = __atomic_load_n(&load_acquire32_value, __ATOMIC_ACQUIRE);
> +	load_acquire64_result = __atomic_load_n(&load_acquire64_value, __ATOMIC_ACQUIRE);
> +#endif
> +
> +	return 0;
> +}

[...]
