Re: [PATCH v3 6/6] KVM: x86: allow defining return-0 static calls

On Fri, Mar 18, 2022 at 05:29:20PM +0100, Paolo Bonzini wrote:
> On 3/17/22 18:43, Maxim Levitsky wrote:
> > diff --git a/arch/x86/include/asm/kvm-x86-ops.h b/arch/x86/include/asm/kvm-x86-ops.h
> > index 20f64e07e359..3388072b2e3b 100644
> > --- a/arch/x86/include/asm/kvm-x86-ops.h
> > +++ b/arch/x86/include/asm/kvm-x86-ops.h
> > @@ -88,7 +88,7 @@ KVM_X86_OP(deliver_interrupt)
> >   KVM_X86_OP_OPTIONAL(sync_pir_to_irr)
> >   KVM_X86_OP_OPTIONAL_RET0(set_tss_addr)
> >   KVM_X86_OP_OPTIONAL_RET0(set_identity_map_addr)
> > -KVM_X86_OP_OPTIONAL_RET0(get_mt_mask)
> > +KVM_X86_OP(get_mt_mask)
> >   KVM_X86_OP(load_mmu_pgd)
> >   KVM_X86_OP(has_wbinvd_exit)
> >   KVM_X86_OP(get_l2_tsc_offset)
> > diff --git a/arch/x86/kvm/svm/svm.c b/arch/x86/kvm/svm/svm.c
> > index a09b4f1a18f6..0c09292b0611 100644
> > --- a/arch/x86/kvm/svm/svm.c
> > +++ b/arch/x86/kvm/svm/svm.c
> > @@ -4057,6 +4057,11 @@ static bool svm_has_emulated_msr(struct kvm *kvm, u32 index)
> >          return true;
> >   }
> > +static u64 svm_get_mt_mask(struct kvm_vcpu *vcpu, gfn_t gfn, bool is_mmio)
> > +{
> > +       return 0;
> > +}
> > +
> >   static void svm_vcpu_after_set_cpuid(struct kvm_vcpu *vcpu)
> >   {
> >          struct vcpu_svm *svm = to_svm(vcpu);
> > @@ -4718,6 +4723,7 @@ static struct kvm_x86_ops svm_x86_ops __initdata = {
> >          .check_apicv_inhibit_reasons = avic_check_apicv_inhibit_reasons,
> >          .apicv_post_state_restore = avic_apicv_post_state_restore,
> > +       .get_mt_mask = svm_get_mt_mask,
> >          .get_exit_info = svm_get_exit_info,
> >          .vcpu_after_set_cpuid = svm_vcpu_after_set_cpuid,
> 
> Thanks, I'll send it as a complete patch.  Please reply there with your
> Signed-off-by.

Yeah, ret0 should only be used with return values up to 'long'.

So ACK on that patch.
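
To spell that out (a minimal userspace sketch; function names made up,
not the kernel code): on i386 a u64 comes back in the %edx:%eax pair,
and a stub that only returns a 'long' worth of zero never touches %edx:

	#include <stdint.h>

	/* Shape of the real helper: it is declared to return 'long', so
	 * on i386 it only guarantees a zeroed %eax. */
	static long return0_sketch(void)
	{
		return 0;
	}

	/* A u64-returning op (e.g. get_mt_mask) reads its result from
	 * %edx:%eax on i386. */
	typedef uint64_t (*mt_mask_fn)(void);

	int main(void)
	{
		/* type-punned, much like wiring a u64 op to a RET0 stub */
		mt_mask_fn op = (mt_mask_fn)return0_sketch;

		/* On i386 this can return a non-zero upper half, since
		 * nothing cleared %edx; on x86_64 it is always 0. */
		return (int)(op() >> 32);
	}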

> Related to this, I don't see anything in arch/x86/kernel/static_call.c that
> limits this code to x86-64:
> 
>                 if (func == &__static_call_return0) {
>                         emulate = code;
>                         code = &xor5rax;
>                 }
> 
> 
> On 32-bit, it will be patched as "dec ax; xor eax, eax" or something like
> that.  Fortunately it doesn't corrupt any callee-save register but it is not
> just a bit funky, it's also not a single instruction.

Urggghh.. that's fairly yuck. So there are two options, I suppose:

	0x66, 0x66, 0x66, 0x31, 0xc0

Which is a triple-prefix xor %eax, %eax, which IIRC should still clear
the whole 64 bits on 64-bit and *should* still not trigger the prefix
decoding penalty some front-ends have (which kicks in above 3 prefixes,
IIRC).

Or we can emit:

	0xb8, 0x00, 0x00, 0x00, 0x00

which decodes to mov $0x0,%eax, which is less efficient on some
front-ends since it doesn't always get picked up in the register
rename stage.
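
FWIW, as a sketch only (array names other than xor5rax are invented
here, and the existing xor5rax bytes are quoted from memory), next to
the current table in arch/x86/kernel/static_call.c that would look
something like:

	/* stand-in for <linux/types.h> so the snippet builds on its own */
	typedef unsigned char u8;

	/*
	 * Existing sequence (from memory): data16 data16 xorq %rax,%rax.
	 * The REX.W (0x48) overrides the data16s on x86_64, but on i386
	 * it decodes as a separate 'dec', which is the problem above.
	 */
	static const u8 xor5rax[] = { 0x66, 0x66, 0x48, 0x31, 0xc0 };

	/*
	 * Option 1, bytes as proposed above.  Caveat worth double-checking
	 * against the SDM: without REX.W the 0x66 operand-size prefixes
	 * make this decode as the 16-bit xor %ax,%ax, so whether it really
	 * clears the full register is exactly the "IIRC" above.
	 */
	static const u8 xor5eax[] = { 0x66, 0x66, 0x66, 0x31, 0xc0 };

	/*
	 * Option 2: mov $0x0,%eax.  Also 5 bytes, decodes identically on
	 * i386 and x86_64 and zero-extends into %rax, but it is not a
	 * recognized zeroing idiom on every front-end.
	 */
	static const u8 mov0eax[] = { 0xb8, 0x00, 0x00, 0x00, 0x00 };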
