Hi,

On 3/6/2025 10:36 AM, Emil Tsalapatis wrote:
> Hi,
>
> thank you for the feedback. I will address it in a v5.
>
> On Wed, Mar 5, 2025 at 8:57 PM Hou Tao <houtao@xxxxxxxxxxxxxxx> wrote:
>> Hi,
>>
>> On 3/6/2025 5:12 AM, Emil Tsalapatis wrote:
>>> Add selftests for the bpf_cpumask_fill helper that sets a bpf_cpumask to
>>> a bit pattern provided by a BPF program.

Just found out that the name of bpf_cpumask_fill() also needs an update.

>>>
>>> Signed-off-by: Emil Tsalapatis (Meta) <emil@xxxxxxxxxxxxxxx>
>>> ---
>>>  .../selftests/bpf/progs/cpumask_failure.c |  38 ++++++
>>>  .../selftests/bpf/progs/cpumask_success.c | 114 ++++++++++++++++++
>>>  2 files changed, 152 insertions(+)
>> My local build failed due to the missing declaration of
>> "bpf_cpumask_populate" in cpumask_common.h. It reported the following error:
>>
>> progs/cpumask_success.c:788:8: error: call to undeclared function
>> 'bpf_cpumask_populate'; ISO C99 and later do not support implicit
>> function declarations [-Wimplicit-function-declaration]
>>   788 |         ret = bpf_cpumask_populate((struct cpumask *)local, &toofewbits, sizeof(toofewbits));
>>
>> I don't know why CI succeeded.
>>
> Based on Alexei's email, systems with recent pahole versions handle
> this fine (at least the CI and my local setup).
> I will still add the definition in cpumask_common.h for uniformity,
> since all the other kfuncs have one.

I see. Thanks for that.

>
>>> diff --git a/tools/testing/selftests/bpf/progs/cpumask_failure.c b/tools/testing/selftests/bpf/progs/cpumask_failure.c
>>> index b40b52548ffb..8a2fd596c8a3 100644
>>> --- a/tools/testing/selftests/bpf/progs/cpumask_failure.c
>>> +++ b/tools/testing/selftests/bpf/progs/cpumask_failure.c
>>> @@ -222,3 +222,41 @@ int BPF_PROG(test_invalid_nested_array, struct task_struct *task, u64 clone_flag
>>>
>>>  	return 0;
>>>  }
>>> +
>>> +SEC("tp_btf/task_newtask")
>>> +__failure __msg("type=scalar expected=fp")
>>> +int BPF_PROG(test_populate_invalid_destination, struct task_struct *task, u64 clone_flags)
>>> +{
>>> +	struct bpf_cpumask *invalid = (struct bpf_cpumask *)0x123456;
>>> +	u64 bits;
>>> +	int ret;
>>> +
>>> +	ret = bpf_cpumask_populate((struct cpumask *)invalid, &bits, sizeof(bits));
>>> +	if (!ret)
>>> +		err = 2;
>>> +
>>> +	return 0;
>>> +}
>>> +

SNIP

>> An extra newline.
>>> @@ -770,3 +771,116 @@ int BPF_PROG(test_refcount_null_tracking, struct task_struct *task, u64 clone_fl
>>>  	bpf_cpumask_release(mask2);
>>>  	return 0;
>>>  }
>>> +
>>> +SEC("tp_btf/task_newtask")
>>> +__success
>> For tp_btf, bpf_prog_test_run() doesn't run the prog; it just returns
>> directly, so the prog below is not exercised at all. How about adding
>> test_populate_reject_small_mask into cpumask_success_testcases first,
>> then switching these test cases to use __success() in a following
>> patch?
> Sorry about that, I had the selftests properly hooked into
> prog_tests/cpumask.c until v3 but saw duplicate entries in the
> selftest log and thought they were being run twice. I will add them
> back in.
>
> Is __success() a different annotation? AFAICT __success is enough as
> long as err is set to nonzero on an error path, and all
> error paths are set like that in the selftests. In that case,
> shouldn't adding the new tests to cpumask_success_testcases be
> enough to properly run the tests?

Yes, the __success() annotation is a bit different. It uses
bpf_prog_test_run() to run the BPF prog directly instead of triggering
the prog through an external event.
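As a rough illustration, the two mechanisms look like this (a sketch
based on the existing prog_tests/cpumask.c and test_progs
infrastructure; the exact names and wiring may differ from what finally
lands):

  /*
   * 1) __success/__failure progs are driven by the test loader, which
   *    loads the object and, where the prog type supports it, runs each
   *    prog via bpf_prog_test_run():
   */
  RUN_TESTS(cpumask_success);

  /*
   * 2) progs listed in cpumask_success_testcases[] are attached to
   *    tp_btf/task_newtask and triggered by forking a child; the test
   *    then checks the global 'err' that the prog sets on failure:
   */
  static const char * const cpumask_success_testcases[] = {
  	/* ... existing entries ... */
  	"test_populate_reject_small_mask",
  	"test_populate_reject_unaligned",
  	"test_populate",
  };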
I think adding the new tests to cpumask_success_testcases will be
enough. However, there is one success test, test_refcount_null_tracking,
in cpumask_success.c which uses the __success annotation, and it is
still buggy. I think it would be better to eventually switch all test
cases to the __success annotation because the annotation provides more
clarity.

>
>
>>> +int BPF_PROG(test_populate_reject_small_mask, struct task_struct *task, u64 clone_flags)
>>> +{
>>> +	struct bpf_cpumask *local;
>>> +	u8 toofewbits;
>>> +	int ret;
>>> +
>>> +	local = create_cpumask();
>>> +	if (!local)
>>> +		return 0;
>>> +
>>> +	/* The kfunc should prevent this operation */
>>> +	ret = bpf_cpumask_populate((struct cpumask *)local, &toofewbits, sizeof(toofewbits));
>>> +	if (ret != -EACCES)
>>> +		err = 2;
>>> +
>>> +	bpf_cpumask_release(local);
>>> +
>>> +	return 0;
>>> +}
>>> +
>>> +/* Mask is guaranteed to be large enough for bpf_cpumask_t. */
>>> +#define CPUMASK_TEST_MASKLEN (sizeof(cpumask_t))
>>> +
>>> +/* Add an extra word for the test_populate_reject_unaligned test. */
>>> +u64 bits[CPUMASK_TEST_MASKLEN / 8 + 1];
>>> +extern bool CONFIG_HAVE_EFFICIENT_UNALIGNED_ACCESS __kconfig __weak;
>>> +
>>> +SEC("tp_btf/task_newtask")
>>> +__success
>> Same for test_populate_reject_unaligned.
>>> +int BPF_PROG(test_populate_reject_unaligned, struct task_struct *task, u64 clone_flags)
>>> +{
>>> +	struct bpf_cpumask *mask;
>>> +	char *src;
>>> +	int ret;
>>> +
>>> +	/* Skip if unaligned accesses are fine for this arch. */
>>> +	if (CONFIG_HAVE_EFFICIENT_UNALIGNED_ACCESS)
>>> +		return 0;
>>> +
>>> +	mask = bpf_cpumask_create();
>>> +	if (!mask) {
>>> +		err = 1;
>>> +		return 0;
>>> +	}
>>> +
>>> +	/* Misalign the source array by a byte. */
>>> +	src = &((char *)bits)[1];
>>> +
>>> +	ret = bpf_cpumask_populate((struct cpumask *)mask, src, CPUMASK_TEST_MASKLEN);
>>> +	if (ret != -EINVAL)
>>> +		err = 2;
>>> +
>>> +	bpf_cpumask_release(mask);
>>> +
>>> +	return 0;
>>> +}
>>> +
>>> +
>>> +SEC("tp_btf/task_newtask")
>>> +__success
>>> +int BPF_PROG(test_populate, struct task_struct *task, u64 clone_flags)
>>> +{
>>> +	struct bpf_cpumask *mask;
>>> +	bool bit;
>>> +	int ret;
>>> +	int i;
>>> +
>>> +	/* Set only odd bits. */
>>> +	__builtin_memset(bits, 0xaa, CPUMASK_TEST_MASKLEN);
>>> +
>>> +	mask = bpf_cpumask_create();
>>> +	if (!mask) {
>>> +		err = 1;
>>> +		return 0;
>>> +	}
>>> +
>>> +	/* Pass the entire bits array, the kfunc will only copy the valid bits. */
>>> +	ret = bpf_cpumask_populate((struct cpumask *)mask, bits, CPUMASK_TEST_MASKLEN);
>>> +	if (ret) {
>>> +		err = 2;
>>> +		goto out;
>>> +	}
>>> +
>>> +	/*
>>> +	 * Test is there to appease the verifier. We cannot directly
>>> +	 * access NR_CPUS, the upper bound for nr_cpus, so we infer
>>> +	 * it from the size of cpumask_t.
>>> +	 */
>>> +	if (nr_cpus < 0 || nr_cpus >= CPUMASK_TEST_MASKLEN * 8) {
>>> +		err = 3;
>>> +		goto out;
>>> +	}
>>> +
>>> +	bpf_for(i, 0, nr_cpus) {
>>> +		/* Odd-numbered bits should be set, even ones unset. */
>>> +		bit = bpf_cpumask_test_cpu(i, (const struct cpumask *)mask);
>>> +		if (bit == (i % 2 != 0))
>>> +			continue;
>>> +
>>> +		err = 4;
>>> +		break;
>>> +	}
>>> +
>>> +out:
>>> +	bpf_cpumask_release(mask);
>>> +
>>> +	return 0;
>>> +}
>>> +
>>> +#undef CPUMASK_TEST_MASKLEN
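One more note on the cpumask_common.h declaration mentioned above: I
would expect it to look roughly like the line below. The prototype is
only inferred from the call sites in these selftests, and the
__ksym/__weak style is assumed to match the other kfunc declarations in
that header, so please adjust it to the actual kfunc signature:

  int bpf_cpumask_populate(struct cpumask *cpumask, void *src, size_t src__sz) __ksym __weak;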