On Thu, Jan 24, 2019 at 9:59 AM Sean Christopherson
<sean.j.christopherson@xxxxxxxxx> wrote:
>
> Unlike the primary vCPU-run flow, the nested early checks code doesn't
> actually want to propagate VM-Fail back to 'vmx'.  Yay copy+paste.
>
> In addition to eliminating the need to clear vmx->fail before
> returning, using a local boolean also drops a reference to 'vmx' in the
> asm blob.  Dropping the reference to 'vmx' will save a register in the
> long run as future patches will shift all pointer references from 'vmx'
> to 'vmx->loaded_vmcs'.
>
> Fixes: 52017608da33 ("KVM: nVMX: add option to perform early consistency checks via H/W")
> Signed-off-by: Sean Christopherson <sean.j.christopherson@xxxxxxxxx>
> ---
>  arch/x86/kvm/vmx/nested.c | 16 ++++++++++------
>  1 file changed, 10 insertions(+), 6 deletions(-)
>
> diff --git a/arch/x86/kvm/vmx/nested.c b/arch/x86/kvm/vmx/nested.c
> index b92291b8e54b..495edb36b525 100644
> --- a/arch/x86/kvm/vmx/nested.c
> +++ b/arch/x86/kvm/vmx/nested.c
> @@ -2717,6 +2717,7 @@ static int nested_vmx_check_vmentry_hw(struct kvm_vcpu *vcpu)
>  {
>  	struct vcpu_vmx *vmx = to_vmx(vcpu);
>  	unsigned long cr3, cr4;
> +	bool vm_fail;
>
>  	if (!nested_early_check)
>  		return 0;
> @@ -2762,14 +2763,18 @@ static int nested_vmx_check_vmentry_hw(struct kvm_vcpu *vcpu)
>  		/* Check if vmlaunch or vmresume is needed */
>  		"cmpb $0, %c[launched](%%" _ASM_CX ")\n\t"
>
> +		/*
> +		 * VMLAUNCH and VMRESUME clear RFLAGS.{CF,ZF} on VM-Exit, set
> +		 * RFLAGS.CF on VM-Fail Invalid and set RFLAGS.ZF on VM-Fail
> +		 * Valid.  vmx_vmenter() directly "returns" RFLAGS, and so the
> +		 * result of VM-Enter is captured via SETBE to vm_fail.
> +		 */
>  		"call vmx_vmenter\n\t"
>
> -		/* Set vmx->fail accordingly */
> -		"setbe %c[fail](%%" _ASM_CX ")\n\t"
> -		: ASM_CALL_CONSTRAINT
> +		"setbe %%dl\n\t"
> +		: ASM_CALL_CONSTRAINT, "=dl"(vm_fail)

Why hardcode %dl?  Can't you use an '=q' constraint and let the
compiler choose?