On Fri, Sep 20, 2019 at 05:24:55PM -0400, Andrea Arcangeli wrote:
> request_immediate_exit is one of those few cases where the pointer to
> function of the method isn't fixed at build time and it requires
> special handling because hardware_setup() may override it at runtime.
>
> Signed-off-by: Andrea Arcangeli <aarcange@xxxxxxxxxx>
> ---
>  arch/x86/kvm/vmx/vmx_ops.c | 5 ++++-
>  1 file changed, 4 insertions(+), 1 deletion(-)
>
> diff --git a/arch/x86/kvm/vmx/vmx_ops.c b/arch/x86/kvm/vmx/vmx_ops.c
> index cdcad73935d9..25d441432901 100644
> --- a/arch/x86/kvm/vmx/vmx_ops.c
> +++ b/arch/x86/kvm/vmx/vmx_ops.c
> @@ -498,7 +498,10 @@ int kvm_x86_ops_check_nested_events(struct kvm_vcpu *vcpu, bool external_intr)
>
>  void kvm_x86_ops_request_immediate_exit(struct kvm_vcpu *vcpu)
>  {
> -	vmx_request_immediate_exit(vcpu);
> +	if (likely(enable_preemption_timer))
> +		vmx_request_immediate_exit(vcpu);
> +	else
> +		__kvm_request_immediate_exit(vcpu);

Rather than wrap this in VMX code, what if we instead take advantage of
a monolithic module and add an inline to query enable_preemption_timer?
That'd likely save a few CALL/RET/JMP instructions and eliminate
__kvm_request_immediate_exit.  E.g. something like:

	if (req_immediate_exit) {
		kvm_make_request(KVM_REQ_EVENT, vcpu);
		if (kvm_x86_has_request_immediate_exit())
			kvm_x86_request_immediate_exit(vcpu);
		else
			smp_send_reschedule(vcpu->cpu);
	}

> }
>
> void kvm_x86_ops_sched_in(struct kvm_vcpu *kvm, int cpu)