2018-01-31 12:37-0500, Paolo Bonzini:
> On 30/01/2018 11:23, Radim Krčmář wrote:
> > 2018-01-27 09:50+0100, Paolo Bonzini:
> >> Place the MSR bitmap in struct loaded_vmcs, and update it in place
> >> every time the x2apic or APICv state can change.  This is rare and
> >> the loop can handle 64 MSRs per iteration, in a similar fashion as
> >> nested_vmx_prepare_msr_bitmap.
> >>
> >> This prepares for choosing, on a per-VM basis, whether to intercept
> >> the SPEC_CTRL and PRED_CMD MSRs.
> >>
> >> Suggested-by: Jim Mattson <jmattson@xxxxxxxxxx>
> >> Signed-off-by: Paolo Bonzini <pbonzini@xxxxxxxxxx>
> >> ---
> >> diff --git a/arch/x86/kvm/vmx.c b/arch/x86/kvm/vmx.c
> >> @@ -10022,7 +10043,7 @@ static inline bool nested_vmx_merge_msr_bitmap(struct kvm_vcpu *vcpu,
> >>  	int msr;
> >>  	struct page *page;
> >>  	unsigned long *msr_bitmap_l1;
> >> -	unsigned long *msr_bitmap_l0 = to_vmx(vcpu)->nested.msr_bitmap;
> >> +	unsigned long *msr_bitmap_l0 = to_vmx(vcpu)->nested.vmcs02.msr_bitmap;
> >
> > The physical address of the nested msr_bitmap is never loaded into the vmcs.
> >
> > The resolution you provided had an extra hunk in prepare_vmcs02_full():
> >
> > +	vmcs_write64(MSR_BITMAP, __pa(vmx->nested.vmcs02.msr_bitmap));
> >
> > I have queued that as:
> >
> > +	if (cpu_has_vmx_msr_bitmap())
> > +		vmcs_write64(MSR_BITMAP, __pa(vmx->nested.vmcs02.msr_bitmap));
>
> Hmm, you're right: it should be in prepare_vmcs02() here (4.15-based),
> and then moved to prepare_vmcs02_full() as part of the conflict
> resolution.

It also makes sense to have it in nested_get_vmcs12_pages, where we
call nested_vmx_prepare_msr_bitmap() and disable MSR bitmaps.

> I'll send a v3.

Thanks.