On Sat, Nov 05, 2022 at 01:28:40PM +0000, Marc Zyngier wrote:

[...]

> > Before:
> >
> >   # perf kvm stat report --vcpu 27
> >
> >   Analyze events for all VMs, VCPU 27:
> >
> >     VM-EXIT  Samples  Samples%  Time%  Min Time  Max Time   Avg time
> >
> >   Total Samples:0, Total events handled time:0.00us.
> >
> > After:
> >
> >   # perf kvm stat report --vcpu 27
> >
> >   Analyze events for all VMs, VCPU 27:
> >
> >     VM-EXIT  Samples  Samples%  Time%  Min Time  Max Time   Avg time
> >
> >       SYS64      808    98.54%  91.24%   0.00us  303.76us   3.46us ( +-  13.54% )
> >         WFx       10     1.22%   7.79%   0.00us   69.48us  23.91us ( +-  25.91% )
> >         IRQ        2     0.24%   0.97%   0.00us   22.64us  14.82us ( +-  52.77% )
> >
> >   Total Samples:820, Total events handled time:3068.28us.
>
> Please educate me: how useful is it to filter on a vcpu number across
> all VMs? What sense does it even make?

The "perf kvm" tool is not sophisticated in this regard: it does not
capture the VMID and the virtual CPU ID together.  One use case is to
pin a program to a specific virtual CPU with taskset inside the VM, and
then check whether any bottleneck is caused by VM entry/exit.  That
said, filtering on the VCPU ID alone is inaccurate; we should consider
tracing the VMID and the VCPU ID together as a later enhancement.

> Conversely, what would be the purpose of filtering on a 5th thread of
> any process irrespective of what the process does? To me, this is the
> same level of non-sense.

I agree.

> AFAICT, this is just piling more arbitrary data extraction for no
> particular reason other than "just because we can", and there is
> absolutely no guarantee that this is fit for anyone else's purpose.
>
> I'd rather you have a generic tracepoint taking the vcpu as a context
> and a BPF program that spits out the information people actually need,
> keeping things out of the kernel. Or even a tracehook (like the
> scheduler does), and let people load a module to dump whatever
> information they please.
Actually I considered three options:

Option 1: Simply add a new version of the trace events to record more
info.  This approach is straightforward and the implementation is
simple, which is the main reason I finally chose to add new trace
events.  The downside is that it is not flexible: we risk adding yet
another version of the trace events if we later find that more data
should be traced.

Option 2: Use kprobes to dynamically insert tracepoints.  But this
means the user must have the corresponding vmlinux file; otherwise,
the perf tool might inject a tracepoint at an incorrect address.  This
is the main reason I didn't use kprobes for dynamic tracepoints.

Option 3: As you suggested, bind the KVM tracepoints to an eBPF program
and let the eBPF program record the perf events.  When I reviewed the
arm64 kvm_entry / kvm_exit trace events, I found they don't carry the
vcpu context in their arguments, so I would still need to add new trace
events to access the "vcpu" context.

Options 1 and 3 both need new trace events; option 1 is the more
straightforward solution, which is why it was chosen in the current
patch set.

I now recognize that I made a mistake: we can instead modify the trace
event definitions for kvm_entry / kvm_exit.  Note we only modify the
trace events' arguments; this changes the trace functions' definitions
but does not break the ABI (the format is exactly the same for user
space).
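To illustrate the point about the ABI: as long as TP_STRUCT__entry and
TP_printk are untouched, the event format exposed to user space stays
the same regardless of the tracepoint's C arguments.  An illustrative
sketch of the format file (the ID and offsets are examples only):

```
# cat /sys/kernel/tracing/events/kvm/kvm_entry/format
name: kvm_entry
ID: 42
format:
	field:unsigned short common_type;	offset:0;	size:2;	signed:0;
	field:unsigned char common_flags;	offset:2;	size:1;	signed:0;
	field:unsigned char common_preempt_count;	offset:3;	size:1;	signed:0;
	field:int common_pid;	offset:4;	size:4;	signed:1;

	field:unsigned long vcpu_pc;	offset:8;	size:8;	signed:0;

print fmt: "PC: 0x%016lx", REC->vcpu_pc
```

Tools like perf parse only this format, so passing the vcpu pointer
into the tracepoint instead of the PC is invisible to them.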
The changes below demonstrate my proposal:

diff --git a/arch/arm64/kvm/arm.c b/arch/arm64/kvm/arm.c
index 94d33e296e10..16f6b61abfec 100644
--- a/arch/arm64/kvm/arm.c
+++ b/arch/arm64/kvm/arm.c
@@ -917,7 +917,7 @@ int kvm_arch_vcpu_ioctl_run(struct kvm_vcpu *vcpu)
 		/**************************************************************
 		 * Enter the guest
 		 */
-		trace_kvm_entry(*vcpu_pc(vcpu));
+		trace_kvm_entry(vcpu);
 		guest_timing_enter_irqoff();

 		ret = kvm_arm_vcpu_enter_exit(vcpu);
diff --git a/arch/arm64/kvm/trace_arm.h b/arch/arm64/kvm/trace_arm.h
index 33e4e7dd2719..9df4fd30093c 100644
--- a/arch/arm64/kvm/trace_arm.h
+++ b/arch/arm64/kvm/trace_arm.h
@@ -12,15 +12,15 @@
  * Tracepoints for entry/exit to guest
  */
 TRACE_EVENT(kvm_entry,
-	TP_PROTO(unsigned long vcpu_pc),
-	TP_ARGS(vcpu_pc),
+	TP_PROTO(struct kvm_vcpu *vcpu),
+	TP_ARGS(vcpu),

 	TP_STRUCT__entry(
 		__field(	unsigned long,	vcpu_pc	)
 	),

 	TP_fast_assign(
-		__entry->vcpu_pc = vcpu_pc;
+		__entry->vcpu_pc = *vcpu_pc(vcpu);
 	),

 	TP_printk("PC: 0x%016lx", __entry->vcpu_pc)

Please let me know your opinion; if you don't object, I can move
forward with this approach.

> But randomly adding new tracepoints to output a semi-useless field
> without any consideration for future-proofing? No, thank you.

Okay.  Thanks for the review!

Leo

_______________________________________________
kvmarm mailing list
kvmarm@xxxxxxxxxxxxxxxxxxxxx
https://lists.cs.columbia.edu/mailman/listinfo/kvmarm