On Wed, Mar 31, 2021 at 9:45 AM Yauheni Kaliuta <yauheni.kaliuta@xxxxxxxxxx> wrote:
>
> Set bpf table sizes dynamically according to the runtime page size
> value.
>
> Do not switch to ASSERT macros, keep CHECK, for consistency with the
> rest of the test. Can be a separate cleanup patch.
>
> Signed-off-by: Yauheni Kaliuta <yauheni.kaliuta@xxxxxxxxxx>
> ---
>  .../selftests/bpf/prog_tests/ringbuf_multi.c | 23 ++++++++++++++++---
>  .../selftests/bpf/progs/test_ringbuf_multi.c |  1 -
>  2 files changed, 20 insertions(+), 4 deletions(-)
>
> diff --git a/tools/testing/selftests/bpf/prog_tests/ringbuf_multi.c b/tools/testing/selftests/bpf/prog_tests/ringbuf_multi.c
> index d37161e59bb2..159de99621c7 100644
> --- a/tools/testing/selftests/bpf/prog_tests/ringbuf_multi.c
> +++ b/tools/testing/selftests/bpf/prog_tests/ringbuf_multi.c
> @@ -41,13 +41,30 @@ static int process_sample(void *ctx, void *data, size_t len)
>  void test_ringbuf_multi(void)
>  {
>  	struct test_ringbuf_multi *skel;
> -	struct ring_buffer *ringbuf;
> +	struct ring_buffer *ringbuf = NULL;
>  	int err;
> +	int page_size = getpagesize();
>
> -	skel = test_ringbuf_multi__open_and_load();
> -	if (CHECK(!skel, "skel_open_load", "skeleton open&load failed\n"))
> +	skel = test_ringbuf_multi__open();
> +	if (CHECK(!skel, "skel_open", "skeleton open failed\n"))
>  		return;
>
> +	err = bpf_map__set_max_entries(skel->maps.ringbuf1, page_size);
> +	if (CHECK(err != 0, "bpf_map__set_max_entries", "bpf_map__set_max_entries failed\n"))
> +		goto cleanup;
> +
> +	err = bpf_map__set_max_entries(skel->maps.ringbuf2, page_size);
> +	if (CHECK(err != 0, "bpf_map__set_max_entries", "bpf_map__set_max_entries failed\n"))
> +		goto cleanup;
> +
> +	err = bpf_map__set_max_entries(bpf_map__inner_map(skel->maps.ringbuf_arr), page_size);
> +	if (CHECK(err != 0, "bpf_map__set_max_entries", "bpf_map__set_max_entries failed\n"))
> +		goto cleanup;
> +
> +	err = test_ringbuf_multi__load(skel);
> +	if (CHECK(err != 0, "skel_load", "skeleton load failed\n"))
> +		goto cleanup;
> +

To test the bpf_map__set_inner_map_fd() interaction with map-in-map
initialization, can you extend the test to have another map-in-map
(it could be a HASH_OF_MAPS, just for fun) which is declaratively
initialized with either ringbuf1 or ringbuf2, but then, from user
space, use a different way to override the inner map definition:

  int proto_fd = bpf_create_map(... RINGBUF of page_size ...);

  bpf_map__set_inner_map_fd(skel->maps.ringbuf_hash, proto_fd);
  close(proto_fd);

  /* perform load, it should succeed */

The important part is to use a different map-in-map than ringbuf_arr,
so that the load fails unless set_inner_map_fd() properly updates the
internals of the map. See the sketch at the end of this mail for one
possible shape of that extension.

>  	/* only trigger BPF program for current process */
>  	skel->bss->pid = getpid();
>
> diff --git a/tools/testing/selftests/bpf/progs/test_ringbuf_multi.c b/tools/testing/selftests/bpf/progs/test_ringbuf_multi.c
> index edf3b6953533..055c10b2ff80 100644
> --- a/tools/testing/selftests/bpf/progs/test_ringbuf_multi.c
> +++ b/tools/testing/selftests/bpf/progs/test_ringbuf_multi.c
> @@ -15,7 +15,6 @@ struct sample {
>
>  struct ringbuf_map {
>  	__uint(type, BPF_MAP_TYPE_RINGBUF);
> -	__uint(max_entries, 1 << 12);
>  } ringbuf1 SEC(".maps"),
>    ringbuf2 SEC(".maps");
>
> --
> 2.31.1
>
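
For concreteness, here is a rough, untested sketch of the kind of
extension I have in mind. The ringbuf_hash name and the CHECK tags are
just placeholders; it assumes the struct ringbuf_map definition and the
page_size/err locals already present in the patch above. On the BPF
side, something like:

  /* hash-of-maps whose inner map prototype is a ring buffer;
   * statically initialized with ringbuf1, so after this patch its
   * BTF-derived inner definition has no fixed max_entries and has to
   * be overridden from user space before load
   */
  struct {
  	__uint(type, BPF_MAP_TYPE_HASH_OF_MAPS);
  	__uint(max_entries, 1);
  	__type(key, int);
  	__array(values, struct ringbuf_map);
  } ringbuf_hash SEC(".maps") = {
  	.values = {
  		[0] = &ringbuf1,
  	},
  };

and on the user-space side, between open and load:

  int proto_fd;

  /* create a page_size ring buffer to act as the inner map prototype
   * and point ringbuf_hash at it; keep the fd open until the skeleton
   * is loaded, since libbpf uses it when it creates the outer map
   */
  proto_fd = bpf_create_map(BPF_MAP_TYPE_RINGBUF, 0, 0, page_size, 0);
  if (CHECK(proto_fd < 0, "bpf_create_map", "bpf_create_map failed\n"))
  	goto cleanup;

  err = bpf_map__set_inner_map_fd(skel->maps.ringbuf_hash, proto_fd);
  if (CHECK(err != 0, "bpf_map__set_inner_map_fd",
  	  "bpf_map__set_inner_map_fd failed\n")) {
  	close(proto_fd);
  	goto cleanup;
  }

  err = test_ringbuf_multi__load(skel);
  if (CHECK(err != 0, "skel_load", "skeleton load failed\n")) {
  	close(proto_fd);
  	goto cleanup;
  }
  close(proto_fd);

Since ringbuf_hash is a separate map-in-map from ringbuf_arr, a broken
bpf_map__set_inner_map_fd() would make the load fail here, which is
exactly the case this extension is meant to catch.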