While running SVM-related instructions (VMRUN/VMSAVE/VMLOAD), some AMD
CPUs check EAX against reserved memory regions (e.g. SMM memory on the
host) before checking the VMCB's instruction intercept. If EAX falls
into such a memory area, #GP is triggered before #VMEXIT, which causes
an unexpected #GP under nested virtualization. To solve this problem,
this patchset makes KVM trap #GP and emulate these SVM instructions
accordingly.

Newer AMD CPUs change this behavior by triggering #VMEXIT before #GP;
the new behavior is indicated by CPUID_0x8000000A_EDX[28]. Under this
circumstance, #GP interception is not required. This patchset supports
the new feature as well.

This patchset has been verified with the vmrun_errata_test and
vmware_backdoor tests of kvm-unit-tests on the following configs. It
was also verified that vmware_backdoor can be turned on under nested
on nested.
  * Current CPU: nested, nested on nested
  * New CPU with X86_FEATURE_SVME_ADDR_CHK: nested, nested on nested

v2->v3:
  * Change the decode function name to x86_decode_emulated_instruction()
  * Add a new variable, svm_gp_erratum_intercept, to control interception
  * Turn on VM's X86_FEATURE_SVME_ADDR_CHK feature in svm_set_cpu_caps()
  * Fix instruction emulation for vmware_backdoor under nested-on-nested
  * Minor comment fixes

v1->v2:
  * Factor out instruction decode for sharing
  * Re-org gp_interception() handling for both #GP and vmware_backdoor
  * Use kvm_cpu_cap for X86_FEATURE_SVME_ADDR_CHK feature support
  * Add nested on nested support

Thanks,
-Wei

Wei Huang (4):
  KVM: x86: Factor out x86 instruction emulation with decoding
  KVM: SVM: Add emulation support for #GP triggered by SVM instructions
  KVM: SVM: Add support for SVM instruction address check change
  KVM: SVM: Support #GP handling for the case of nested on nested

 arch/x86/include/asm/cpufeatures.h |   1 +
 arch/x86/kvm/svm/svm.c             | 128 +++++++++++++++++++++++++----
 arch/x86/kvm/x86.c                 |  62 ++++++++------
 arch/x86/kvm/x86.h                 |   2 +
 4 files changed, 152 insertions(+), 41 deletions(-)

--
2.27.0
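
P.S. For reference, below is a minimal userspace sketch showing how the
CPUID_0x8000000A_EDX[28] bit mentioned above can be probed. It is
illustration only, not part of the patchset, and assumes a GCC/Clang
toolchain providing <cpuid.h>; it also does not bother to verify that
SVM itself is advertised first, which a real probe should do.

/*
 * Sketch: report whether the CPU advertises the SVM address-check
 * change (CPUID leaf 0x8000000A, EDX bit 28), i.e. whether
 * VMRUN/VMSAVE/VMLOAD take #VMEXIT before the #GP reserved-memory
 * check. Illustration only; KVM consumes this bit in-kernel as
 * X86_FEATURE_SVME_ADDR_CHK, not via a check like this.
 */
#include <stdio.h>
#include <cpuid.h>

int main(void)
{
	unsigned int eax, ebx, ecx, edx;

	/* __get_cpuid() returns 0 if the leaf is out of range. */
	if (!__get_cpuid(0x8000000A, &eax, &ebx, &ecx, &edx)) {
		printf("CPUID leaf 0x8000000A not available\n");
		return 1;
	}

	if (edx & (1u << 28))
		printf("SVME_ADDR_CHK set: #VMEXIT is taken before #GP\n");
	else
		printf("SVME_ADDR_CHK clear: #GP erratum handling needed\n");

	return 0;
}

When the bit is clear, the erratum applies and KVM falls back to the
#GP trap-and-emulate path described above.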