The topdown metrics events have been part of the default event set since
commit 42641d6f4d15 ("perf stat: Add Topdown metrics events as default
events"). perf uses 'slots' if
/sys/bus/event_source/devices/cpu/events/slots is available.

Unfortunately, 'slots' may not be supported in a virtualization
environment: the hypervisor may not expose the 'slots' counter to the VM
in CPUID. As a result, the kernel may disable the topdown slots and
metrics events in intel_pmu_init() when 'slots' is not in CPUID, e.g.,
both c->weight and c->idxmsk64 are set to 0.

On an Icelake VM, the error below occurs when 'slots' is the group
leader and creating the event fails:

$ perf stat
Error:
The sys_perf_event_open() syscall returned with 22 (Invalid argument) for
event (slots).
/bin/dmesg | grep -i perf may provide additional information.

This is because stat_handle_error() returns COUNTER_FATAL when 'slots'
is used as the leader of the events.

While the issue will be fixed on the kernel side by hiding the 'slots'
sysfs entries, also fix it in perf userspace: an event is regarded as
not supported if its leader is not supported. The userspace fix also
changes the way the error is reported when the leader event is not
supported.

Cc: Like Xu <like.xu.linux@xxxxxxxxx>
Cc: Kan Liang <kan.liang@xxxxxxxxxxxxxxx>
Cc: Joe Jin <joe.jin@xxxxxxxxxx>
Signed-off-by: Dongli Zhang <dongli.zhang@xxxxxxxxxx>
---
The fix for the kernel side is:
https://lore.kernel.org/all/20220922201505.2721654-1-kan.liang@xxxxxxxxxxxxxxx/

As suggested by Like Xu in the discussion below, we may also need a
userspace fix since it is easier and more agile to update the perf tool
than the kernel code or KVM-emulated capabilities.
https://lore.kernel.org/all/20220922071017.17398-1-dongli.zhang@xxxxxxxxxx/

 tools/perf/builtin-stat.c | 10 +++++++---
 1 file changed, 7 insertions(+), 3 deletions(-)

diff --git a/tools/perf/builtin-stat.c b/tools/perf/builtin-stat.c
index 0b4a62e4ff67..0cde917b9a26 100644
--- a/tools/perf/builtin-stat.c
+++ b/tools/perf/builtin-stat.c
@@ -762,9 +762,7 @@ static enum counter_recovery stat_handle_error(struct evsel *counter)
 		 */
 		counter->errored = true;
 
-		if ((evsel__leader(counter) != counter) ||
-		    !(counter->core.leader->nr_members > 1))
-			return COUNTER_SKIP;
+		return COUNTER_SKIP;
 	} else if (evsel__fallback(counter, errno, msg, sizeof(msg))) {
 		if (verbose > 0)
 			ui__warning("%s\n", msg);
@@ -843,6 +841,9 @@ static int __run_perf_stat(int argc, const char **argv, int run_idx)
 		if (target.use_bpf)
 			break;
 
+		if (evsel__leader(counter) != counter &&
+		    !evsel__leader(counter)->supported)
+			continue;
 		if (counter->reset_group || counter->errored)
 			continue;
 		if (evsel__is_bpf(counter))
@@ -901,6 +902,9 @@ static int __run_perf_stat(int argc, const char **argv, int run_idx)
 		evlist__for_each_cpu(evlist_cpu_itr, evsel_list, affinity) {
 			counter = evlist_cpu_itr.evsel;
 
+			if (evsel__leader(counter) != counter &&
+			    !evsel__leader(counter)->supported)
+				continue;
 			if (!counter->reset_group && !counter->errored)
 				continue;
 			if (!counter->reset_group)
-- 
2.34.1
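
For reference, below is a minimal standalone sketch (not perf code and
not part of the patch; only the sysfs path comes from the description
above) that probes the same sysfs entry perf consults before adding the
topdown default events. Note that on an unfixed kernel in a VM the entry
may still exist even though opening the 'slots' event later fails with
EINVAL, which is exactly the case the userspace change above handles.

/*
 * Hypothetical helper, not perf code: probe the sysfs entry that perf
 * checks before adding the topdown default events.  With the
 * kernel-side fix, a VM without the slots counter will not have this
 * file; on an unfixed kernel it may still be present even though the
 * 'slots' event cannot actually be opened.
 */
#include <stdio.h>
#include <unistd.h>

int main(void)
{
	const char *path = "/sys/bus/event_source/devices/cpu/events/slots";

	if (access(path, R_OK) == 0)
		printf("'slots' is advertised in sysfs\n");
	else
		printf("'slots' is not advertised; topdown default events are skipped\n");

	return 0;
}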