On 03/25, Daniel Borkmann wrote:
> On 03/19/2019 10:53 PM, Stanislav Fomichev wrote:
> > When running stacktrace_build_id_nmi, try to query
> > kernel.perf_event_max_sample_rate sysctl and use it as a sample_freq.
> > If there was an error reading sysctl, fallback to 5000.
> >
> > kernel.perf_event_max_sample_rate sysctl can drift and/or can be
> > adjusted by the perf tool, so assuming a fixed number might be
> > problematic on a long running machine.
> >
> > Signed-off-by: Stanislav Fomichev <sdf@xxxxxxxxxx>
>
> Mostly trying to understand rationale a bit better in context of
> selftests; perf_event_max_sample_rate could drift also after this
> patch here, but I presume you are saying that the frequency we
> request below would interfere too much with perf tool adjusted
> one and thus affect whole rest of kernel also after selftests
> finished running, so below would handle it more gracefully, right?

Not really, the kernel would straight out reject our attempt to
syscall(perf_event_open) when sample_freq >= kernel.perf_event_max_sample_rate
sysctl:
https://git.kernel.org/pub/scm/linux/kernel/git/bpf/bpf-next.git/tree/kernel/events/core.c#n10724

For this test, we don't really care about a specific sample_freq, we just
want our bpf prog to trigger at least once, so we can check the build-id.

Maybe another way to fix it would be to convert to sample_period.

Song, any specific reason you went with sample_freq and not sample_period
in your original proposal?
>
> > ---
> >  .../bpf/prog_tests/stacktrace_build_id_nmi.c | 16 +++++++++++++++-
> >  1 file changed, 15 insertions(+), 1 deletion(-)
> >
> > diff --git a/tools/testing/selftests/bpf/prog_tests/stacktrace_build_id_nmi.c b/tools/testing/selftests/bpf/prog_tests/stacktrace_build_id_nmi.c
> > index 8a114bb1c379..1c1a2f75f3d8 100644
> > --- a/tools/testing/selftests/bpf/prog_tests/stacktrace_build_id_nmi.c
> > +++ b/tools/testing/selftests/bpf/prog_tests/stacktrace_build_id_nmi.c
> > @@ -1,13 +1,25 @@
> >  // SPDX-License-Identifier: GPL-2.0
> >  #include <test_progs.h>
> >
> > +static __u64 read_perf_max_sample_freq(void)
> > +{
> > +	__u64 sample_freq = 5000; /* fallback to 5000 on error */
> > +	FILE *f;
> > +
> > +	f = fopen("/proc/sys/kernel/perf_event_max_sample_rate", "r");
> > +	if (f == NULL)
> > +		return sample_freq;
> > +	fscanf(f, "%llu", &sample_freq);
> > +	fclose(f);
> > +	return sample_freq;
> > +}
> > +
> >  void test_stacktrace_build_id_nmi(void)
> >  {
> >  	int control_map_fd, stackid_hmap_fd, stackmap_fd, stack_amap_fd;
> >  	const char *file = "./test_stacktrace_build_id.o";
> >  	int err, pmu_fd, prog_fd;
> >  	struct perf_event_attr attr = {
> > -		.sample_freq = 5000,
> >  		.freq = 1,
> >  		.type = PERF_TYPE_HARDWARE,
> >  		.config = PERF_COUNT_HW_CPU_CYCLES,
> > @@ -20,6 +32,8 @@ void test_stacktrace_build_id_nmi(void)
> >  	int build_id_matches = 0;
> >  	int retry = 1;
> >
> > +	attr.sample_freq = read_perf_max_sample_freq();
> > +
> >  retry:
> >  	err = bpf_prog_load(file, BPF_PROG_TYPE_PERF_EVENT, &obj, &prog_fd);
> >  	if (CHECK(err, "prog_load", "err %d errno %d\n", err, errno))
> >