Nit (because I really suck at case-insensitive searches), please capitalize
"KVM: VMX:" in the shortlog.

On Fri, Nov 18, 2022, Alexey Dobriyan wrote:
> __vmx_vcpu_run_flags() returns "unsigned int" and uses only 2 bits of it
> so using "unsigned long" is very much pointless.

And __vmx_vcpu_run() and vmx_spec_ctrl_restore_host() take an "unsigned int"
as well, i.e. actually relying on an "unsigned long" value wouldn't work
anyway.

On a related topic, this code in __vmx_vcpu_run() is unnecessarily fragile,
as it relies on VMX_RUN_VMRESUME being in bits 0-7:

	/* Copy @flags to BL, _ASM_ARG3 is volatile. */
	mov %_ASM_ARG3B, %bl

	...

	/* Check if vmlaunch or vmresume is needed */
	testb $VMX_RUN_VMRESUME, %bl

The "byte" logic is another holdover from when "flags" was just "launched"
and was passed in as a boolean.

I'll send a proper patch to do:

diff --git a/arch/x86/kvm/vmx/vmenter.S b/arch/x86/kvm/vmx/vmenter.S
index 0b5db4de4d09..5bd39f63497d 100644
--- a/arch/x86/kvm/vmx/vmenter.S
+++ b/arch/x86/kvm/vmx/vmenter.S
@@ -69,8 +69,8 @@ SYM_FUNC_START(__vmx_vcpu_run)
 	 */
 	push %_ASM_ARG2
 
-	/* Copy @flags to BL, _ASM_ARG3 is volatile. */
-	mov %_ASM_ARG3B, %bl
+	/* Copy @flags to EBX, _ASM_ARG3 is volatile. */
+	mov %_ASM_ARG3L, %ebx
 
 	lea (%_ASM_SP), %_ASM_ARG2
 	call vmx_update_host_rsp
@@ -106,7 +106,7 @@ SYM_FUNC_START(__vmx_vcpu_run)
 	mov (%_ASM_SP), %_ASM_AX
 
 	/* Check if vmlaunch or vmresume is needed */
-	testb $VMX_RUN_VMRESUME, %bl
+	test $VMX_RUN_VMRESUME, %ebx
 
 	/* Load guest registers. Don't clobber flags. */
 	mov VCPU_RCX(%_ASM_AX), %_ASM_CX
@@ -128,7 +128,7 @@ SYM_FUNC_START(__vmx_vcpu_run)
 	/* Load guest RAX. This kills the @regs pointer! */
 	mov VCPU_RAX(%_ASM_AX), %_ASM_AX
 
-	/* Check EFLAGS.ZF from 'testb' above */
+	/* Check EFLAGS.ZF from 'test VMX_RUN_VMRESUME' above */
 	jz .Lvmlaunch
 
 /*

> Signed-off-by: Alexey Dobriyan <adobriyan@xxxxxxxxx>
> ---

Reviewed-by: Sean Christopherson <seanjc@xxxxxxxxxx>
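
For reference, a rough sketch of the declarations in play (paraphrased from
arch/x86/kvm/vmx/run_flags.h and arch/x86/kvm/vmx/vmx.h; the exact prototypes
in the tree at the time may differ slightly):

	/* Only bits 0 and 1 are defined, so "unsigned int" is plenty. */
	#define VMX_RUN_VMRESUME		(1 << 0)
	#define VMX_RUN_SAVE_SPEC_CTRL		(1 << 1)

	struct vcpu_vmx;

	unsigned int __vmx_vcpu_run_flags(struct vcpu_vmx *vmx);

	/*
	 * Both consumers take "unsigned int" flags, so an "unsigned long"
	 * return value couldn't be relied upon anyway.
	 */
	bool __vmx_vcpu_run(struct vcpu_vmx *vmx, unsigned long *regs,
			    unsigned int flags);
	void vmx_spec_ctrl_restore_host(struct vcpu_vmx *vmx,
					unsigned int flags);

The fragility of the "testb" form is that a byte-sized TEST can only encode
an 8-bit immediate: if VMX_RUN_VMRESUME ever moved to bit 8 or above,
"testb $VMX_RUN_VMRESUME, %bl" would truncate the mask (at best with an
assembler warning), whereas "test $VMX_RUN_VMRESUME, %ebx" covers all 32
bits of @flags.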