Hi Chao,

> -----Original Message-----
> From: Chao Gao <chao.gao@xxxxxxxxx>
> Sent: Sunday, January 29, 2023 10:42 PM
> To: Kechen Lu <kechenl@xxxxxxxxxx>
> Cc: kvm@xxxxxxxxxxxxxxx; seanjc@xxxxxxxxxx; pbonzini@xxxxxxxxxx;
> zhi.wang.linux@xxxxxxxxx; shaoqin.huang@xxxxxxxxx;
> vkuznets@xxxxxxxxxx; linux-kernel@xxxxxxxxxxxxxxx
> Subject: Re: [RFC PATCH v6 5/6] KVM: x86: add vCPU scoped toggling for
> disabled exits
>
> External email: Use caution opening links or attachments
>
>
> On Sat, Jan 21, 2023 at 02:07:37AM +0000, Kechen Lu wrote:
> >+static void svm_update_disabled_exits(struct kvm_vcpu *vcpu)
>
> Is it possible to call this function on vCPU creation, i.e., consolidate
> initialization and runtime toggling?
>

Chao, can you elaborate on this? If I understand correctly, you mean
replacing the currently redundant code on vCPU creation that checks
xxx_in_guest and sets the intercepts, and instead calling this
svm/vmx_update_disabled_exits(). Yeah, I think this makes sense to me.

BR,
Kechen

> >+{
> >+	struct vcpu_svm *svm = to_svm(vcpu);
> >+	struct vmcb_control_area *control = &svm->vmcb->control;
> >+
> >+	if (kvm_hlt_in_guest(vcpu))
> >+		svm_clr_intercept(svm, INTERCEPT_HLT);
> >+	else
> >+		svm_set_intercept(svm, INTERCEPT_HLT);
> >+
> >+	if (kvm_mwait_in_guest(vcpu)) {
> >+		svm_clr_intercept(svm, INTERCEPT_MONITOR);
> >+		svm_clr_intercept(svm, INTERCEPT_MWAIT);
> >+	} else {
> >+		svm_set_intercept(svm, INTERCEPT_MONITOR);
> >+		svm_set_intercept(svm, INTERCEPT_MWAIT);
> >+	}
> >+
> >+	if (kvm_pause_in_guest(vcpu)) {
> >+		svm_clr_intercept(svm, INTERCEPT_PAUSE);
> >+	} else {
> >+		control->pause_filter_count = pause_filter_count;
> >+		if (pause_filter_thresh)
> >+			control->pause_filter_thresh = pause_filter_thresh;
> >+	}
> >+}
> >+
> > static void svm_vm_destroy(struct kvm *kvm)
> > {
> > 	avic_vm_destroy(kvm);
> >@@ -4825,7 +4852,10 @@ static struct kvm_x86_ops svm_x86_ops __initdata = {
> > 	.complete_emulated_msr = svm_complete_emulated_msr,
> >
> > 	.vcpu_deliver_sipi_vector = svm_vcpu_deliver_sipi_vector,
> >+
> > 	.vcpu_get_apicv_inhibit_reasons = avic_vcpu_get_apicv_inhibit_reasons,
> >+
> >+	.update_disabled_exits = svm_update_disabled_exits,
> > };
> >
> > /*
> >diff --git a/arch/x86/kvm/vmx/vmx.c b/arch/x86/kvm/vmx/vmx.c
> >index 019a20029878..f5137afdd424 100644
> >--- a/arch/x86/kvm/vmx/vmx.c
> >+++ b/arch/x86/kvm/vmx/vmx.c
> >@@ -8070,6 +8070,41 @@ static void vmx_vm_destroy(struct kvm *kvm)
> > 	free_pages((unsigned long)kvm_vmx->pid_table, vmx_get_pid_table_order(kvm));
> > }
> >
> >+static void vmx_update_disabled_exits(struct kvm_vcpu *vcpu)
>
> ditto.