On Mon, 11 Nov 2019 at 21:06, Vitaly Kuznetsov <vkuznets@xxxxxxxxxx> wrote:
>
> Wanpeng Li <kernellwp@xxxxxxxxx> writes:
>
> > +
> > static void vmx_vcpu_run(struct kvm_vcpu *vcpu)
> > {
> >         struct vcpu_vmx *vmx = to_vmx(vcpu);
> > @@ -6615,6 +6645,12 @@ static void vmx_vcpu_run(struct kvm_vcpu *vcpu)
> >                                   | (1 << VCPU_EXREG_CR3));
> >         vcpu->arch.regs_dirty = 0;
> >
> > +       vmx->exit_reason = vmx->fail ? 0xdead : vmcs_read32(VM_EXIT_REASON);
> > +       vcpu->fast_vmexit = false;
> > +       if (!is_guest_mode(vcpu) &&
> > +           vmx->exit_reason == EXIT_REASON_MSR_WRITE)
> > +               vcpu->fast_vmexit = handle_ipi_fastpath(vcpu);
>
> I have to admit this looks too much to me :-( Yes, I see the benefits of
> taking a shortcut (by actually penalizing all other MSR writes) but the
> question I have is: where do we stop?

In our IaaS environment the observation is that ICR and TSCDEADLINE are
the main MSR write vmexits.

Before the patch:
  tscdeadline_immed 3900
  tscdeadline       5413

After the patch:
  tscdeadline_immed 3912
  tscdeadline       5427

So the penalty on the other MSR write paths can be tolerated.

>
> Also, this 'shortcut' creates an imbalance in tracing: you don't go down
> to kvm_emulate_wrmsr() so handle_ipi_fastpath() should probably gain a
> tracepoint.

Agreed, something like the first sketch at the end of this mail could work.

>
> Looking at the 'fast_vmexit' name makes me think this is something
> generic. Is this so? Maybe we can create some sort of an infrastructure
> for fast vmexit handling and make it easy to hook things up to it?

Maybe an indirect jump? I can have a try; the second sketch at the end of
this mail is roughly what I have in mind.

    Wanpeng
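
For the tracing imbalance, the simplest option might be to reuse the
existing kvm_msr tracepoint from inside handle_ipi_fastpath() before it
short-circuits the exit. The body below is only a stub to show the
placement (the filtering and IPI delivery stay as in the patch), and the
bool return type is just my assumption:

static bool handle_ipi_fastpath(struct kvm_vcpu *vcpu)
{
        u32 msr = kvm_rcx_read(vcpu);
        u64 data = kvm_read_edx_eax(vcpu);

        /* only the x2APIC ICR write is handled on this path */
        if (msr != (APIC_BASE_MSR + (APIC_ICR >> 4)))
                return false;

        /* keep tracing balanced with the kvm_emulate_wrmsr() path */
        trace_kvm_msr_write(msr, data);

        /* ... fixed-mode IPI delivery as in the patch ... */

        return true;
}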
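
And for the generic infrastructure question, a rough sketch of the
"indirect jump" idea: a table of optional fastpath handlers indexed by the
basic exit reason, consulted right after VM_EXIT_REASON is read.
fastpath_fn, fastpath_handlers[] and vmx_try_fastpath() are made-up names
for illustration, not existing KVM code, and this is untested:

typedef bool (*fastpath_fn)(struct kvm_vcpu *vcpu);

/* only MSR_WRITE is hooked up for now; more handlers can be added later */
static const fastpath_fn fastpath_handlers[] = {
        [EXIT_REASON_MSR_WRITE] = handle_ipi_fastpath,
};

static bool vmx_try_fastpath(struct kvm_vcpu *vcpu, u32 exit_reason)
{
        fastpath_fn fn;

        /* nested exits, failed entries (0xdead) and unknown reasons fall through */
        if (is_guest_mode(vcpu) || exit_reason >= ARRAY_SIZE(fastpath_handlers))
                return false;

        fn = fastpath_handlers[exit_reason];
        return fn ? fn(vcpu) : false;
}

vmx_vcpu_run() would then just do

        vcpu->fast_vmexit = vmx_try_fastpath(vcpu, vmx->exit_reason);

and the EXIT_REASON_MSR_WRITE special case disappears from the hot path.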