On Mon, Aug 29, 2022, Wang, Wei W wrote:
> On Thursday, August 25, 2022 4:56 PM, Xiaoyao Li wrote:
>  #if defined(CONFIG_PERF_EVENTS) && defined(CONFIG_CPU_SUP_AMD)
> diff --git a/arch/x86/kvm/vmx/vmx.c b/arch/x86/kvm/vmx/vmx.c
> index d7f8331d6f7e..195debc1bff1 100644
> --- a/arch/x86/kvm/vmx/vmx.c
> +++ b/arch/x86/kvm/vmx/vmx.c
> @@ -1125,37 +1125,29 @@ static inline void pt_save_msr(struct pt_ctx *ctx, u32 addr_range)
>  
>  static void pt_guest_enter(struct vcpu_vmx *vmx)
>  {
> -	if (vmx_pt_mode_is_system())
> +	struct perf_event *event;
> +
> +	if (vmx_pt_mode_is_system() ||
> +	    !(vmx->pt_desc.guest.ctl & RTIT_CTL_TRACEEN))

I don't think the host should trace the guest in the host/guest mode just
because the guest isn't tracing itself.  I.e. the host still needs to turn
off its own tracing.

>  		return;
>  
> -	/*
> -	 * GUEST_IA32_RTIT_CTL is already set in the VMCS.
> -	 * Save host state before VM entry.
> -	 */
> -	rdmsrl(MSR_IA32_RTIT_CTL, vmx->pt_desc.host.ctl);
> -	if (vmx->pt_desc.guest.ctl & RTIT_CTL_TRACEEN) {
> -		wrmsrl(MSR_IA32_RTIT_CTL, 0);
> -		pt_save_msr(&vmx->pt_desc.host, vmx->pt_desc.num_address_ranges);
> -		pt_load_msr(&vmx->pt_desc.guest, vmx->pt_desc.num_address_ranges);
> -	}
> +	event = pt_get_curr_event();
> +	perf_event_disable(event);
> +	vmx->pt_desc.host_event = event;

This is effectively what I suggested[*], the main difference being that my
version adds dedicated enter/exit helpers so that perf can skip save/restore
of the other MSRs.  It's easy to extend if perf needs to hand back an event
to complete the "exit".

	bool guest_trace_enabled = vmx->pt_desc.guest.ctl & RTIT_CTL_TRACEEN;

	vmx->pt_desc.host_event = intel_pt_guest_enter(guest_trace_enabled);

and then on exit

	bool guest_trace_enabled = vmx->pt_desc.guest.ctl & RTIT_CTL_TRACEEN;

	intel_pt_guest_exit(vmx->pt_desc.host_event, guest_trace_enabled);

[*] https://lore.kernel.org/all/YwecducnM%2FU6tqJT@xxxxxxxxxx

> +	pt_load_msr(&vmx->pt_desc.guest, vmx->pt_desc.num_address_ranges);
>  }