On Mon, Aug 02, 2021, Jim Mattson wrote:
> On Mon, Aug 2, 2021 at 3:21 PM Krish Sadhukhan <krish.sadhukhan@xxxxxxxxxx> wrote:
> >
> >
> > On 8/2/21 9:39 AM, Paolo Bonzini wrote:
> > > On 02/08/21 18:34, Sean Christopherson wrote:
> > >> On Mon, Aug 02, 2021, Paolo Bonzini wrote:
> > >>> On 21/06/21 22:43, Krish Sadhukhan wrote:
> > >>>> With this patch KVM entry and exit tracepoints will
> > >>>> show "guest_mode = 0" if it is a guest and "guest_mode = 1" if it is a
> > >>>> nested guest.
> > >>>
> > >>> What about adding a "(nested)" suffix for L2, and nothing for L1?
> > >>
> > >> That'd work too, though it would be nice to get vmcx12 printed as well
> > >> so that it would be possible to determine which L2 is running without
> > >> having to cross-reference other tracepoints.
> > >
> > > Yes, it would be nice but it would also clutter the output a bit.

But with my gross hack, it'd only clutter nested entries/exits.

> > > It's like what we have already in kvm_inj_exception:
> > >
> > > TP_printk("%s (0x%x)",
> > >           __print_symbolic(__entry->exception, kvm_trace_sym_exc),
> > >           /* FIXME: don't print error_code if not present */
> > >           __entry->has_error ? __entry->error_code : 0)
> > >
> > > It could be done with a trace-cmd plugin, but that creates other issues
> > > since it essentially forces the tracepoints to have a stable API.
> >
> > Also, the vmcs/vmcb address is vCPU-specific, so if L2 runs on 10 vCPUs,
> > traces will show 10 different addresses for the same L2, which is not
> > convenient on a cloud host where hundreds of L1s and L2s run.
>
> The vmcx02 address is vCPU-specific.  However, Sean asked for the
> vmcx12 address, which is a GPA that is common across all vCPUs.

Ya.  Obviously it doesn't help identify L2 vCPU relationships, e.g. if an L2
VM runs 10 vCPUs of its own, but in most cases the sequence of what was run
for a given L1 vCPU is what's interesting and relevant, whereas knowing which
L2 vCPUs belong to which L2 VM isn't often critical information for
debug/triage.
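
If it helps to visualize, a rough sketch of that kind of conditional printing
is below.  The field names are purely illustrative, not the actual kvm_exit
layout (today's tracepoint doesn't carry guest_mode or a vmcx12 GPA, which is
the point of the patch).  The "(nested)" tag would only show up for L2 exits,
and the vmcx12 value falls back to 0 for L1, same idea as the
kvm_inj_exception FIXME above:

	TP_STRUCT__entry(
		__field(unsigned int,	vcpu_id)
		__field(u32,		exit_reason)
		__field(bool,		guest_mode)
		__field(u64,		vmcx12_gpa)
	),

	TP_printk("vcpu %u reason %u%s vmcx12 0x%llx",
		  __entry->vcpu_id,
		  __entry->exit_reason,
		  __entry->guest_mode ? " (nested)" : "",
		  /* 0 for L1, a la the kvm_inj_exception FIXME */
		  __entry->guest_mode ? __entry->vmcx12_gpa : 0ull)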