On Mon, Nov 2, 2020 at 3:41 AM David Verbeiren <david.verbeiren@xxxxxxxxxxxx> wrote:
>
> On Thu, Oct 29, 2020 at 11:37 PM Andrii Nakryiko
> <andrii.nakryiko@xxxxxxxxx> wrote:
> >
> > On Thu, Oct 29, 2020 at 11:36 AM Song Liu <song@xxxxxxxxxx> wrote:
> > >
> > > On Thu, Oct 29, 2020 at 4:19 AM David Verbeiren
> > > <david.verbeiren@xxxxxxxxxxxx> wrote:
> > > >
> > > > Tests that when per-cpu hash map or LRU hash map elements are
> > > > re-used as a result of a BPF program inserting elements, the
> > > > element values for CPUs other than the one executing the
> > > > BPF code are reset to 0.
> > > >
> > > > This validates the fix proposed in:
> > > > https://lkml.kernel.org/bpf/20201027221324.27894-1-david.verbeiren@xxxxxxxxxxxx/
> [...]
> > > > ---
> > > > +
> > > > +/* executes bpf program that updates map with key, value */
> > > > +static int bpf_prog_insert_elem(int fd, map_key_t key, map_value_t value)
> > > > +{
> > > > +	struct bpf_load_program_attr prog;
> > > > +	struct bpf_insn insns[] = {
> > > > +		/* load key and value constants into scratch registers */
> > > > +		BPF_LD_IMM64(BPF_REG_8, key),
> > > > +		BPF_LD_IMM64(BPF_REG_9, value),
> > > > +
> > > > +		/* update: R1=fd, R2=&key, R3=&value, R4=flags */
> > > > +		BPF_LD_MAP_FD(BPF_REG_1, fd),
> > > > +		BPF_MOV64_REG(BPF_REG_2, BPF_REG_10),
> > > > +		BPF_ALU64_IMM(BPF_ADD, BPF_REG_2, -8),
> > > > +		BPF_STX_MEM(BPF_DW, BPF_REG_2, BPF_REG_8, 0),
> > > > +		BPF_MOV64_REG(BPF_REG_3, BPF_REG_2),
> > > > +		BPF_ALU64_IMM(BPF_ADD, BPF_REG_3, -8),
> > > > +		BPF_STX_MEM(BPF_DW, BPF_REG_3, BPF_REG_9, 0),
> > > > +		BPF_MOV64_IMM(BPF_REG_4, 0),
> > > > +		BPF_EMIT_CALL(BPF_FUNC_map_update_elem),
> > > > +
> > > > +		BPF_MOV64_IMM(BPF_REG_0, 0),
> > > > +		BPF_EXIT_INSN(),
> > > > +	};
> > >
> > > Impressive hand-written assembly. ;-) I would recommend using a skeleton
> > > for future work. For example:
> > >
> > > BPF program: selftests/bpf/progs/bpf_iter_bpf_map.c
> > > Use the program in tests:
> > > selftests/bpf/prog_tests/bpf_iter.c:#include "bpf_iter_bpf_map.skel.h"
> > >
> >
> > Let's keep manually-constructed assembly to test_verifier tests only.
> >
> > David, please also check progs/test_endian.c and prog_tests/endian.c
> > as one of the most minimal self-tests with no added complexity, but
> > with a complete end-to-end setup.
>
> Thanks for the suggestion, Andrii. I tried using the same simple setup
> as prog_tests/endian.c, but unfortunately, when using the sys_enter
> tracepoint, the BPF program runs several times, on various CPUs.
> This invalidates the check in userspace that verifies that the value was
> updated for only one CPU and was initialized to 0 for the other ones.
> I tried to change the BPF program so it would only run once, but I bumped
> into the limitation that the return value of __sync_fetch_and_add()
> (and family) cannot be used. Any suggestions for this? Can I combine
> a skeleton with bpf_prog_test_run()?

Replied to the new version of the patch.

You can use a more selective tracepoint (so the test won't
accidentally trigger it multiple times) and filter on thread ID.

And yes, of course you can use bpf_prog_test_run() with a skeleton. In
the end it's all about FDs, which you easily get from the skeleton.
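
Something along these lines for the BPF side (completely untested
sketch; the file, map, and variable names are all made up, not actual
selftest names). The idea is to hook a rarely-triggered syscall
tracepoint and bail out unless the calling thread is the test thread,
so the map update happens exactly once:

/* progs/test_pcpu_map_init.c (hypothetical) */
#include <linux/bpf.h>
#include <bpf/bpf_helpers.h>

__u32 tid = 0;			/* set from userspace before triggering */

struct {
	__uint(type, BPF_MAP_TYPE_PERCPU_HASH);
	__uint(max_entries, 2);
	__type(key, __u64);
	__type(value, __u64);
} hashmap1 SEC(".maps");

SEC("tp/syscalls/sys_enter_getpgid")
int sysenter_getpgid(const void *ctx)
{
	__u64 key = 1, value = 0xdeadbeef;

	/* lower 32 bits of pid_tgid are the calling thread's ID */
	if ((__u32)bpf_get_current_pid_tgid() == tid)
		bpf_map_update_elem(&hashmap1, &key, &value, BPF_ANY);
	return 0;
}

char _license[] SEC("license") = "GPL";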
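
And the matching userspace side, again just a sketch assuming the
skeleton above was generated as test_pcpu_map_init.skel.h by bpftool
gen skeleton:

#include <unistd.h>
#include <sys/syscall.h>
#include <bpf/libbpf.h>
#include "test_pcpu_map_init.skel.h"	/* hypothetical generated skeleton */

static int trigger_map_init_once(void)
{
	struct test_pcpu_map_init *skel;
	int map_fd, err = -1;

	skel = test_pcpu_map_init__open_and_load();
	if (!skel)
		return -1;

	/* prog only reacts to getpgid() calls from this thread */
	skel->bss->tid = syscall(SYS_gettid);

	err = test_pcpu_map_init__attach(skel);
	if (err)
		goto out;

	syscall(SYS_getpgid);	/* runs the prog exactly once */

	/* in the end it's all FDs: use this with bpf_map_lookup_elem()
	 * to check the per-CPU values from userspace
	 */
	map_fd = bpf_map__fd(skel->maps.hashmap1);
	(void)map_fd;
out:
	test_pcpu_map_init__destroy(skel);
	return err;
}

If you'd rather go through bpf_prog_test_run() instead of attaching,
the prog FD comes out of bpf_program__fd(skel->progs.<prog_name>) in
exactly the same way.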