On Tue, Aug 20, 2019 at 02:43:32PM +0300, Mihai Donțu wrote:
> On Tue, 2019-08-20 at 08:44 +0000, Nicusor CITU wrote:
> > > > > > +static void vmx_msr_intercept(struct kvm_vcpu *vcpu, unsigned int msr,
> > > > > > +			       bool enable)
> > > > > > +{
> > > > > > +	struct vcpu_vmx *vmx = to_vmx(vcpu);
> > > > > > +	unsigned long *msr_bitmap = vmx->vmcs01.msr_bitmap;
> > >
> > > Is KVMI intended to play nice with nested virtualization?
> > > Unconditionally updating vmcs01.msr_bitmap is correct regardless of
> > > whether the vCPU is in L1 or L2, but if the vCPU is currently in L2
> > > then the effective bitmap, i.e. vmcs02.msr_bitmap, won't be updated
> > > until the next nested VM-Enter.
> >
> > Our initial proof of concept ran successfully under nested
> > virtualization, but most of our tests were done on bare metal.
> > We do, however, intend to make it fully functional on nested systems
> > too.
> >
> > Even so, from KVMI's point of view, the MSR interception
> > configuration would be just fine as long as it is updated before the
> > vCPU actually enters the nested VM.
>
> I believe Sean is referring here to the case where the guest being
> introspected is a hypervisor (e.g. Windows 10 with Device Guard).

Yep.

> Even though we are looking at how to approach this scenario, the
> introspection tools we have built will refuse to attach to a
> hypervisor.

In that case, it's probably a good idea to make KVMI mutually exclusive
with nested virtualization.  Doing so should, in theory, simplify the
implementation and expedite upstreaming, e.g. reviewers wouldn't have to
nitpick edge cases related to nested virt.  My only hesitation in
disabling KVMI when nested virt is enabled is that it could make it much
more difficult to (re)enable the combination in the future.
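
To make that concrete, a minimal sketch of the mutual exclusion, with
the caveat that kvmi_hook() and nested_enabled() are stand-ins; the
actual attach entry point in the KVMI series, and how the arch code
reports that nested virt is enabled, may well look different:

/* Illustrative only: refuse to introspect a VM when nested virt is on. */
static int kvmi_hook(struct kvm *kvm)
{
	/*
	 * nested_enabled() is a placeholder for however the arch code
	 * exposes the state of the "nested" module parameter.
	 */
	if (nested_enabled(kvm))
		return -EOPNOTSUPP;

	/* ... proceed with the normal hooking path ... */
	return 0;
}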
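
And if you do end up supporting the combination instead, the intercept
update would also need to refresh the active bitmap when the vCPU is in
L2.  A rough, untested sketch, assuming the existing static
vmx_{enable,disable}_intercept_for_msr() helpers in vmx.c; note that only
the enable path touches vmcs02, since clearing a bit there behind L1's
back could drop an intercept that L1 asked for:

static void vmx_msr_intercept(struct kvm_vcpu *vcpu, unsigned int msr,
			      bool enable)
{
	struct vcpu_vmx *vmx = to_vmx(vcpu);
	unsigned long *msr_bitmap = vmx->vmcs01.msr_bitmap;

	/* vmcs01 is always updated; it is the source of truth for L1. */
	if (enable)
		vmx_enable_intercept_for_msr(msr_bitmap, msr, MSR_TYPE_RW);
	else
		vmx_disable_intercept_for_msr(msr_bitmap, msr, MSR_TYPE_RW);

	/*
	 * If the vCPU is in L2, the effective bitmap is vmcs02's.  Force
	 * the intercept on immediately; disabling can safely wait until
	 * the next nested VM-Enter, when vmcs02's bitmap is rebuilt by
	 * merging the L0 and L1 bitmaps.
	 */
	if (enable && is_guest_mode(vcpu))
		vmx_enable_intercept_for_msr(vmx->nested.vmcs02.msr_bitmap,
					     msr, MSR_TYPE_RW);
}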