On Fri, Jul 12, 2019 at 11:24 AM Andrii Nakryiko <andrii.nakryiko@xxxxxxxxx> wrote:
>
> On Fri, Jul 12, 2019 at 10:46 AM Ilya Leoshkevich <iii@xxxxxxxxxxxxx> wrote:
> >
> > Many s390 setups (most notably, KVM guests) do not have access to
> > hardware performance events.
> >
> > Therefore, use the software event instead.
> >
> > Signed-off-by: Ilya Leoshkevich <iii@xxxxxxxxxxxxx>
> > Acked-by: Vasily Gorbik <gor@xxxxxxxxxxxxx>
> > ---
> >  tools/testing/selftests/bpf/prog_tests/send_signal.c | 9 +++++++++
> >  1 file changed, 9 insertions(+)
> >
> > diff --git a/tools/testing/selftests/bpf/prog_tests/send_signal.c b/tools/testing/selftests/bpf/prog_tests/send_signal.c
> > index 67cea1686305..4a45ea0b8448 100644
> > --- a/tools/testing/selftests/bpf/prog_tests/send_signal.c
> > +++ b/tools/testing/selftests/bpf/prog_tests/send_signal.c
> > @@ -176,10 +176,19 @@ static int test_send_signal_tracepoint(void)
> >  static int test_send_signal_nmi(void)
> >  {
> >         struct perf_event_attr attr = {
> > +#if defined(__s390__)
> > +               /* Many s390 setups (most notably, KVM guests) do not have
> > +                * access to hardware performance events.
> > +                */
> > +               .sample_period = 1,
> > +               .type = PERF_TYPE_SOFTWARE,
> > +               .config = PERF_COUNT_SW_CPU_CLOCK,
> > +#else
>
> Is there any harm in switching all archs to software event? I'd rather
> avoid all those special arch cases, which will be really hard to test
> for people without direct access to them.

I would still like to use hardware cpu-cycles here in order to test NMI.

In a physical box:

  $ perf list

  List of pre-defined events (to be used in -e):

    branch-instructions OR branches                    [Hardware event]
    branch-misses                                      [Hardware event]
    bus-cycles                                         [Hardware event]
    cache-misses                                       [Hardware event]
    cache-references                                   [Hardware event]
    cpu-cycles OR cycles                               [Hardware event]
    instructions                                       [Hardware event]
    ref-cycles                                         [Hardware event]

    alignment-faults                                   [Software event]
    bpf-output                                         [Software event]
    context-switches OR cs                             [Software event]
    cpu-clock                                          [Software event]
    cpu-migrations OR migrations                       [Software event]
    dummy                                              [Software event]
    emulation-faults                                   [Software event]
    major-faults                                       [Software event]
    minor-faults                                       [Software event]
    page-faults OR faults                              [Software event]
    task-clock                                         [Software event]

    L1-dcache-load-misses                              [Hardware cache event]
    ...

In a VM:

  $ perf list

  List of pre-defined events (to be used in -e):

    alignment-faults                                   [Software event]
    bpf-output                                         [Software event]
    context-switches OR cs                             [Software event]
    cpu-clock                                          [Software event]
    cpu-migrations OR migrations                       [Software event]
    dummy                                              [Software event]
    emulation-faults                                   [Software event]
    major-faults                                       [Software event]
    minor-faults                                       [Software event]
    page-faults OR faults                              [Software event]
    task-clock                                         [Software event]

    msr/smi/                                           [Kernel PMU event]
    msr/tsc/                                           [Kernel PMU event]
    .....

Is it possible to detect at runtime whether the hardware cpu-cycles
event is available? If it is, let's use the hardware event; otherwise,
skip the subtest or fall back to the software event. The software event
does not really exercise NMI, so it takes the same code path in the
kernel as the tracepoint case.

> >                 .sample_freq = 50,
> >                 .freq = 1,
> >                 .type = PERF_TYPE_HARDWARE,
> >                 .config = PERF_COUNT_HW_CPU_CYCLES,
> > +#endif
> >         };
> >
> >         return test_send_signal_common(&attr, BPF_PROG_TYPE_PERF_EVENT, "perf_event");
> > --
> > 2.21.0
> >
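
For the runtime detection, something along the lines of the untested
sketch below could work. The helper names and the fallback policy are
only illustrative, not what the selftest does today:

#include <stdbool.h>
#include <string.h>
#include <unistd.h>
#include <sys/syscall.h>
#include <linux/perf_event.h>

/* Probe whether a hardware cpu-cycles event can be opened at all. */
static bool hw_cpu_cycles_available(void)
{
	struct perf_event_attr attr = {
		.size = sizeof(attr),
		.type = PERF_TYPE_HARDWARE,
		.config = PERF_COUNT_HW_CPU_CYCLES,
	};
	int fd;

	/* perf_event_open() has no glibc wrapper, hence syscall(). */
	fd = syscall(__NR_perf_event_open, &attr, 0 /* self */,
		     -1 /* any cpu */, -1 /* no group */, 0);
	if (fd < 0)
		return false;
	close(fd);
	return true;
}

static void choose_sample_event(struct perf_event_attr *attr)
{
	memset(attr, 0, sizeof(*attr));
	attr->size = sizeof(*attr);
	if (hw_cpu_cycles_available()) {
		/* Real NMI path: sample on hardware cpu-cycles. */
		attr->type = PERF_TYPE_HARDWARE;
		attr->config = PERF_COUNT_HW_CPU_CYCLES;
		attr->freq = 1;
		attr->sample_freq = 50;
	} else {
		/* No PMU (e.g. many s390 KVM guests): fall back to the
		 * software cpu-clock event, or skip the subtest instead.
		 */
		attr->type = PERF_TYPE_SOFTWARE;
		attr->config = PERF_COUNT_SW_CPU_CLOCK;
		attr->sample_period = 1;
	}
}

That would keep the NMI coverage on machines with a working PMU while
still letting the test run (or be skipped) in VMs without one.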