On 14/12/2014 21:20, Greg Kroah-Hartman wrote:
> 3.10-stable review patch. If anyone has any objections, please let me know.
>
> ------------------
>
> From: Nadav Har'El <nyh@xxxxxxxxxx>
>
> commit bfd0a56b90005f8c8a004baf407ad90045c2b11e upstream.
>
> If we let L1 use EPT, we should probably also support the INVEPT instruction.
>
> In our current nested EPT implementation, when L1 changes its EPT table
> for L2 (i.e., EPT12), L0 modifies the shadow EPT table (EPT02) and, in
> the course of this modification, already calls INVEPT. But if the last
> level of a shadow page is unsync, not all of L1's changes to EPT12 are
> intercepted, which means roots need to be synced when L1 calls INVEPT.
> Global INVEPT should not be different, since roots are synced by
> kvm_mmu_load() each time EPTP02 changes.
>
> Reviewed-by: Xiao Guangrong <xiaoguangrong@xxxxxxxxxxxxxxxxxx>
> Signed-off-by: Nadav Har'El <nyh@xxxxxxxxxx>
> Signed-off-by: Jun Nakajima <jun.nakajima@xxxxxxxxx>
> Signed-off-by: Xinhao Xu <xinhao.xu@xxxxxxxxx>
> Signed-off-by: Yang Zhang <yang.z.zhang@xxxxxxxxx>
> Signed-off-by: Gleb Natapov <gleb@xxxxxxxxxx>
> Signed-off-by: Paolo Bonzini <pbonzini@xxxxxxxxxx>
> [bwh: Backported to 3.2:
>  - Adjust context, filename
>  - Simplify handle_invept() as recommended by Paolo - nEPT is not
>    supported so we always raise #UD]
> Signed-off-by: Ben Hutchings <ben@xxxxxxxxxxxxxxx>
> Cc: Vinson Lee <vlee@xxxxxxxxxxxxxxxx>
> Signed-off-by: Greg Kroah-Hartman <gregkh@xxxxxxxxxxxxxxxxxxx>
>
> ---
>  arch/x86/include/uapi/asm/vmx.h |    1 +
>  arch/x86/kvm/vmx.c              |    8 ++++++++
>  2 files changed, 9 insertions(+)
>
> --- a/arch/x86/include/uapi/asm/vmx.h
> +++ b/arch/x86/include/uapi/asm/vmx.h
> @@ -65,6 +65,7 @@
>  #define EXIT_REASON_EOI_INDUCED         45
>  #define EXIT_REASON_EPT_VIOLATION       48
>  #define EXIT_REASON_EPT_MISCONFIG       49
> +#define EXIT_REASON_INVEPT              50
>  #define EXIT_REASON_PREEMPTION_TIMER    52
>  #define EXIT_REASON_WBINVD              54
>  #define EXIT_REASON_XSETBV              55
> --- a/arch/x86/kvm/vmx.c
> +++ b/arch/x86/kvm/vmx.c
> @@ -6242,6 +6242,12 @@ static int handle_vmptrst(struct kvm_vcp
>          return 1;
>  }
>
> +static int handle_invept(struct kvm_vcpu *vcpu)
> +{
> +        kvm_queue_exception(vcpu, UD_VECTOR);
> +        return 1;
> +}
> +
>  /*
>   * The exit handlers return 1 if the exit was handled fully and guest execution
>   * may resume. Otherwise they set the kvm_run parameter to indicate what needs
> @@ -6286,6 +6292,7 @@ static int (*const kvm_vmx_exit_handlers
>          [EXIT_REASON_PAUSE_INSTRUCTION]       = handle_pause,
>          [EXIT_REASON_MWAIT_INSTRUCTION]       = handle_invalid_op,
>          [EXIT_REASON_MONITOR_INSTRUCTION]     = handle_invalid_op,
> +        [EXIT_REASON_INVEPT]                  = handle_invept,
>  };
>
>  static const int kvm_vmx_max_exit_handlers =
> @@ -6512,6 +6519,7 @@ static bool nested_vmx_exit_handled(stru
>          case EXIT_REASON_VMPTRST: case EXIT_REASON_VMREAD:
>          case EXIT_REASON_VMRESUME: case EXIT_REASON_VMWRITE:
>          case EXIT_REASON_VMOFF: case EXIT_REASON_VMON:
> +        case EXIT_REASON_INVEPT:
>                  /*
>                   * VMX instructions trap unconditionally. This allows L1 to
>                   * emulate them for its L2 guest, i.e., allows 3-level nesting!

Reviewed-by: Paolo Bonzini <pbonzini@xxxxxxxxxx>

Thanks Greg.

Paolo
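
For anyone who wants to see what the backport means from inside an L1 guest,
here is a rough, untested sketch. It is not part of the patch: the 66 0f 38 80
opcode is the SDM encoding of INVEPT (the ModRM byte used here selects the
(%rdx), %rcx operand pair), while the wrapper name, descriptor struct and type
constants are made up for illustration. It assumes the host runs kvm_intel
with nested=1 and that L1 has already entered VMX operation via VMXON; outside
VMX operation, INVEPT raises #UD natively, without any VM exit.

/*
 * Illustrative sketch only -- not from the patch. Executed in an L1
 * guest that is in VMX operation, INVEPT unconditionally causes a VM
 * exit to L0 with exit reason 50 (EXIT_REASON_INVEPT).
 */
#include <stdint.h>

struct invept_desc {
        uint64_t eptp;          /* EPT pointer, used by the single-context type */
        uint64_t reserved;      /* must be zero */
};

#define INVEPT_SINGLE_CONTEXT   1
#define INVEPT_GLOBAL           2

/* invept (%rdx), %rcx -- spelled with .byte because older assemblers
 * do not know the mnemonic */
static inline void invept(unsigned long type, struct invept_desc *desc)
{
        asm volatile(".byte 0x66, 0x0f, 0x38, 0x80, 0x0a"
                     : : "c" (type), "d" (desc)
                     : "cc", "memory");
}

On a 3.10.y host with this patch applied, that VM exit is routed through
kvm_vmx_exit_handlers[] to handle_invept(), which queues #UD, so L1 observes
an invalid-opcode fault -- consistent with this kernel not exposing nested EPT
(and therefore no INVEPT capability) to L1. Without the handler, the exit
would instead fall through to the unknown-exit-reason path and the guest would
typically be killed, which is what the backport guards against.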