On Mon, 2023-09-25 at 17:03 -0700, Sean Christopherson wrote:
> On Sun, Sep 24, 2023, Maxim Levitsky wrote:
> > This patch series is intended to add some selected information
> > to the KVM tracepoints to make it easier to gather insights about
> > running nested guests.
> >
> > This patch series was developed together with a new x86 performance
> > analysis tool that I developed recently
> > (https://gitlab.com/maximlevitsky/kvmon), which aims to be a better
> > kvm_stat and lets you see at a glance what is happening in a VM,
> > including nesting.
>
> Rather than add more and more tracepoints, I think we should be more
> thoughtful about (a) where we place KVM's tracepoints and (b) giving
> userspace the necessary hooks to write BPF programs to extract whatever
> data is needed at any given time.
>
> There's simply no way we can iterate fast enough in KVM tracepoints to
> adapt to userspace's debug/monitoring needs. E.g. if it turns out
> someone wants detailed info on hypercalls that use memory or registers
> beyond ABCD, the new tracepoints won't help them.
>
> If all KVM tracepoints grab "struct kvm_vcpu" and force VMCS "registers"
> to be cached (or decached, depending on one's viewpoint), then I think
> that'll serve 99% of use cases. E.g. the vCPU gives a BPF program
> kvm_vcpu, vcpu_{vmx,svm}, kvm, etc.
>
> trace_kvm_exit is a good example: despite all of the information that
> is captured by KVM, it's borderline worthless for CPUID and MSR exits,
> because their interesting information is held in registers and not
> captured in the VMCS or VMCB.
>
> There are some BTF type info issues that I've encountered, but I
> suspect that's as much a PEBKAC problem as anything.

While eBPF has its use cases, none of the extra tracepoints were added
solely for the monitoring tool, and I do understand that tracepoints
are a limited resource. Each tracepoint/field was added only when it
was also found to be useful for regular KVM tracing.

Best regards,
	Maxim Levitsky
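
P.S. For reference, something along the lines of what Sean describes,
pulling a register beyond ABCD out of struct kvm_vcpu from a BPF
program, is already possible without new tracepoints by hooking the
hypercall handler itself. A rough, untested sketch follows; it assumes
a libbpf/CO-RE build with a generated vmlinux.h, and a kernel that
supports fentry attachment to kvm_emulate_hypercall in kvm.ko (verify
the symbol and your kernel's config before relying on this):

/* Sketch only: read guest RSI at hypercall emulation time.
 * Assumes libbpf/CO-RE (clang -target bpf) and a generated vmlinux.h.
 */
#include "vmlinux.h"
#include <bpf/bpf_helpers.h>
#include <bpf/bpf_tracing.h>
#include <bpf/bpf_core_read.h>

SEC("fentry/kvm_emulate_hypercall")
int BPF_PROG(on_hypercall, struct kvm_vcpu *vcpu)
{
	/* On x86, GPRs other than RSP/RIP are saved into
	 * vcpu->arch.regs on VM-exit, so registers beyond ABCD
	 * (e.g. RSI) can be read here without extra decaching.
	 */
	unsigned long rsi = BPF_CORE_READ(vcpu, arch.regs[VCPU_REGS_RSI]);

	bpf_printk("hypercall on vcpu %u: rsi=%lx",
		   BPF_CORE_READ(vcpu, vcpu_id), rsi);
	return 0;
}

char LICENSE[] SEC("license") = "GPL";

The output lands in the trace pipe
(/sys/kernel/debug/tracing/trace_pipe), which is enough for ad-hoc
debugging; a real tool would aggregate into a BPF map instead.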