KVM's assembly for transitioning to/from a VMX guest is currently
implemented via inline asm.  At best it can be called "inscrutable", at
worst, well, that can't be printed here.  This series' ultimate goal is
to move the transition code to a proper assembly sub-routine that can
be directly invoked from C code.  Unsurprisingly, making that happen
requires a large number of patches to carefully disarm all of the booby
traps hiding in the shadows.
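To illustrate the intended end state (the prototype and call site below
are a rough sketch for this cover letter, not lifted verbatim from the
patches), the idea is that the asm routine gets a regular C prototype
and the inline asm blob in vmx_vcpu_run() collapses into an ordinary
function call:

  /* Sketch: C-callable VM-Enter/VM-Exit routine, implemented in .S. */
  bool __vmx_vcpu_run(struct vcpu_vmx *vmx, unsigned long *regs,
                      bool launched);

  /* Sketch: vmx_vcpu_run() then does the transition via a plain call. */
  vmx->fail = __vmx_vcpu_run(vmx, (unsigned long *)&vcpu->arch.regs,
                             vmx->loaded_vmcs->launched);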
This series does NOT apply directly on the official KVM branches, but
rather on the official branches plus the patch that splits out a small
amount of vmx_vcpu_run() code to a helper, __vmx_vcpu_run()[1].  Adding
the helper function fixes a bug in kernel v5.0, i.e. it absolutely
should be applied before this series, and not accounting for that
change would result in non-trivial conflicts.

A few patches in this series are carried over from the back half of the
series that moved VM-Enter and VM-Exit to proper assembly[2], but this
is versioned as a new series given its much more ambitious end goal.

v1: https://patchwork.kernel.org/cover/10771525/

v2:
 - Fully tested 32-bit, amazingly there was no breakage.
 - Use 'b' and '=b' for asm constraints instead of trying to get fancy
   with 'bl' and '=ebx'. [Paolo]
 - Rename explicit VCPU reg indices to __VCPU_REGS_R*. [Paolo]
 - Add Jim and Konrad's Reviewed-by tags.

[1] https://patchwork.kernel.org/patch/10765309/
[2] https://patchwork.kernel.org/cover/10739549/

Sean Christopherson (29):
  KVM: VMX: Compare only a single byte for VMCS' "launched" in vCPU-run
  KVM: nVMX: Check a single byte for VMCS "launched" in nested early checks
  KVM: VMX: Modify only RSP when creating a placeholder for guest's RCX
  KVM: VMX: Save RSI to an unused output in the vCPU-run asm blob
  KVM: VMX: Manually load RDX in vCPU-run asm blob
  KVM: VMX: Let the compiler save/load RDX during vCPU-run
  KVM: nVMX: Remove a rogue "rax" clobber from nested_vmx_check_vmentry_hw()
  KVM: nVMX: Drop STACK_FRAME_NON_STANDARD from nested_vmx_check_vmentry_hw()
  KVM: nVMX: Explicitly reference the scratch reg in nested early checks
  KVM: nVMX: Capture VM-Fail to a local var in nested_vmx_check_vmentry_hw()
  KVM: nVMX: Capture VM-Fail via CC_{SET,OUT} in nested early checks
  KVM: nVMX: Reference vmx->loaded_vmcs->launched directly
  KVM: nVMX: Let the compiler select the reg for holding HOST_RSP
  KVM: nVMX: Cache host_rsp on a per-VMCS basis
  KVM: VMX: Load/save guest CR2 via C code in __vmx_vcpu_run()
  KVM: VMX: Update VMCS.HOST_RSP via helper C function
  KVM: VMX: Pass "launched" directly to the vCPU-run asm blob
  KVM: VMX: Invert the ordering of saving guest/host scratch reg at VM-Enter
  KVM: VMX: Don't save guest registers after VM-Fail
  KVM: VMX: Use vcpu->arch.regs directly when saving/loading guest state
  KVM: x86: Explicitly #define the VCPU_REGS_* indices
  KVM: VMX: Use #defines in place of immediates in VM-Enter inline asm
  KVM: VMX: Create a stack frame in vCPU-run
  KVM: VMX: Move vCPU-run code to a proper assembly routine
  KVM: VMX: Fold __vmx_vcpu_run() back into vmx_vcpu_run()
  KVM: VMX: Rename ____vmx_vcpu_run() to __vmx_vcpu_run()
  KVM: VMX: Use RAX as the scratch register during vCPU-run
  KVM: VMX: Make the vCPU-run asm routine callable from C
  KVM: VMX: Reorder clearing of registers in the vCPU-run assembly flow

 arch/x86/include/asm/kvm_host.h      |  33 +++---
 arch/x86/include/asm/kvm_vcpu_regs.h |  25 +++++
 arch/x86/kvm/vmx/nested.c            |  53 ++++-----
 arch/x86/kvm/vmx/vmcs.h              |   1 +
 arch/x86/kvm/vmx/vmenter.S           | 159 ++++++++++++++++++++++++++
 arch/x86/kvm/vmx/vmx.c               | 160 +++------------------------
 arch/x86/kvm/vmx/vmx.h               |   3 +-
 7 files changed, 241 insertions(+), 193 deletions(-)
 create mode 100644 arch/x86/include/asm/kvm_vcpu_regs.h

-- 
2.20.1