Re: [PATCH v2 10/18] KVM: nVMX: split pieces of prepare_vmcs02() to prepare_vmcs02_early()

On Thu, 2018-09-20 at 12:22 -0700, Jim Mattson wrote:
> On Tue, Aug 28, 2018 at 9:04 AM, Sean Christopherson
> <sean.j.christopherson@xxxxxxxxx> wrote:
> > 
> > Add prepare_vmcs02_early() and move pieces of prepare_vmcs02() to the
> > new function.  prepare_vmcs02_early() writes the bits of vmcs02 that
> > a) must be in place to pass the VMFail consistency checks (assuming
> > vmcs12 is valid) and b) are needed to recover from a VMExit, e.g. host
> > state that is loaded on VMExit.  Splitting the functionality will
> > enable KVM to leverage hardware to do VMFail consistency checks via
> > a dry run of VMEnter and recover from a potential VMExit without
> > having to fully initialize vmcs02.
> > 
> > Add prepare_vmcs02_first_run() to handle writing vmcs02 state that
> > comes from vmcs01 and never changes, i.e. we don't need to rewrite
> > any of the vmcs02 state that is effectively constant once defined.
> > 
> > Signed-off-by: Sean Christopherson <sean.j.christopherson@xxxxxxxxx>
> > ---
> >  arch/x86/kvm/vmx.c | 410 +++++++++++++++++++++++++--------------------
> >  1 file changed, 225 insertions(+), 185 deletions(-)
> > 
> > diff --git a/arch/x86/kvm/vmx.c b/arch/x86/kvm/vmx.c
> > index cb8df73e9b49..492dc154c31e 100644
> > --- a/arch/x86/kvm/vmx.c
> > +++ b/arch/x86/kvm/vmx.c
> > @@ -11904,10 +11904,229 @@ static u64 nested_vmx_calc_efer(struct vcpu_vmx *vmx, struct vmcs12 *vmcs12)
> >                 return vmx->vcpu.arch.efer & ~(EFER_LMA | EFER_LME);
> >  }
> > 
> > -static void prepare_vmcs02_full(struct kvm_vcpu *vcpu, struct vmcs12 *vmcs12)
> > +static void prepare_vmcs02_first_run(struct vcpu_vmx *vmx)
> >  {
> > -       struct vcpu_vmx *vmx = to_vmx(vcpu);
> > +       /*
> > +        * If we have never launched vmcs02, set the constant vmcs02 state
> > +        * according to L0's settings (vmcs12 is irrelevant here).  Host
> > +        * fields that come from L0 and are not constant, e.g. HOST_CR3,
> > +        * will be set as needed prior to VMLAUNCH/VMRESUME.
> > +        */
> > +       if (vmx->nested.vmcs02.launched)
> > +               return;
> The launched flag gets cleared on logical CPU migration, but we don't
> actually have to execute this code again. Should we introduce a new
> boolean?

Oooh, good point.  I think I'll add that optimization in a separate
patch to make it easier to bisect in case there's something wrong
with the optimization.
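
Rough sketch of what I have in mind, using a hypothetical
'vmcs02_initialized' flag that, unlike vmcs02.launched, would not be
cleared on logical CPU migration (name and placement TBD, untested):

	static void prepare_vmcs02_first_run(struct vcpu_vmx *vmx)
	{
		/*
		 * Skip the write-once vmcs02 state if it has already been
		 * initialized; the new flag survives logical CPU migration,
		 * so this path truly runs only on first use of vmcs02.
		 */
		if (vmx->nested.vmcs02_initialized)
			return;
		vmx->nested.vmcs02_initialized = true;

		/* ...existing constant-state VMWRITEs... */
	}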

> > 
> > +       /* All VMFUNCs are currently emulated through L0 vmexits.  */
> > +       if (cpu_has_vmx_vmfunc())
> > +               vmcs_write64(VM_FUNCTION_CONTROL, 0);
> > +
> > +       if (cpu_has_vmx_posted_intr())
> > +               vmcs_write16(POSTED_INTR_NV, POSTED_INTR_NESTED_VECTOR);
> > +
> > +       if (cpu_has_vmx_msr_bitmap())
> > +               vmcs_write64(MSR_BITMAP, __pa(vmx->nested.vmcs02.msr_bitmap));
> > +
> > +       if (enable_pml)
> > +               vmcs_write64(PML_ADDRESS, page_to_phys(vmx->pml_pg));
> Was the ASSERT(vmx->pml_pg) deliberately dropped?

Yes.  I'll move it to a separate patch and drop the other instance
of the assert from vmx_vcpu_setup().
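
I.e. the standalone patch would keep a single assertion at the actual
usage site, something along these lines (sketch):

	if (enable_pml) {
		ASSERT(vmx->pml_pg);
		vmcs_write64(PML_ADDRESS, page_to_phys(vmx->pml_pg));
	}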

> > 
> > +       /*
> > +        * Set the MSR load/store lists to match L0's settings.  Only the
> > +        * addresses are constant (for vmcs02), the counts can change based
> > +        * on L2's behavior, e.g. switching to/from long mode.
> > +        */
> > +       vmcs_write32(VM_EXIT_MSR_STORE_COUNT, 0);
> > +       vmcs_write64(VM_EXIT_MSR_LOAD_ADDR, __pa(vmx->msr_autoload.host.val));
> > +       vmcs_write64(VM_ENTRY_MSR_LOAD_ADDR, __pa(vmx->msr_autoload.guest.val));
> > +
> > +       vmx_set_constant_host_state(vmx);
> > +}
> > +
> > +static void prepare_vmcs02_early_full(struct vcpu_vmx *vmx,
> > +                                     struct vmcs12 *vmcs12)
> > +{
> > +       prepare_vmcs02_first_run(vmx);
> > +
> > +       vmcs_write64(VMCS_LINK_POINTER, -1ull);
> > +
> > +       if (enable_vpid) {
> > +               if (nested_cpu_has_vpid(vmcs12) && vmx->nested.vpid02)
> > +                       vmcs_write16(VIRTUAL_PROCESSOR_ID, vmx->nested.vpid02);
> > +               else
> > +                       vmcs_write16(VIRTUAL_PROCESSOR_ID, vmx->vpid);
> > +       }
> > +}
> > +
> > +static void prepare_vmcs02_early(struct vcpu_vmx *vmx, struct vmcs12 *vmcs12)
> > +{
> > +       u32 exec_control, vmcs12_exec_ctrl;
> > +       u64 guest_efer = nested_vmx_calc_efer(vmx, vmcs12);
> > +
> > +       if (vmx->nested.dirty_vmcs12)
> > +               prepare_vmcs02_early_full(vmx, vmcs12);
> > +
> > +       /*
> > +        * HOST_RSP is normally set correctly in vmx_vcpu_run() just before
> > +        * entry, but only if the current (host) sp changed from the value
> > +        * we wrote last (vmx->host_rsp).  This cache is no longer relevant
> > +        * if we switch vmcs, and rather than hold a separate cache per vmcs,
> > +        * here we just force the write to happen on entry.
> > +        */
> > +       vmx->host_rsp = 0;
> > +
> > +       /*
> > +        * PIN CONTROLS
> > +        */
> > +       exec_control = vmcs12->pin_based_vm_exec_control;
> > +
> > +       /* Preemption timer setting is only taken from vmcs01.  */
> > +       exec_control &= ~PIN_BASED_VMX_PREEMPTION_TIMER;
> > +       exec_control |= vmcs_config.pin_based_exec_ctrl;
> > +       if (vmx->hv_deadline_tsc == -1)
> > +               exec_control &= ~PIN_BASED_VMX_PREEMPTION_TIMER;
> > +
> > +       /* Posted interrupts setting is only taken from vmcs12.  */
> > +       if (nested_cpu_has_posted_intr(vmcs12)) {
> > +               vmx->nested.posted_intr_nv = vmcs12->posted_intr_nv;
> > +               vmx->nested.pi_pending = false;
> > +       } else {
> > +               exec_control &= ~PIN_BASED_POSTED_INTR;
> > +       }
> > +       vmcs_write32(PIN_BASED_VM_EXEC_CONTROL, exec_control);
> > +
> > +       /*
> > +        * EXEC CONTROLS
> > +        */
> > +       exec_control = vmx_exec_control(vmx); /* L0's desires */
> > +       exec_control &= ~CPU_BASED_VIRTUAL_INTR_PENDING;
> > +       exec_control &= ~CPU_BASED_VIRTUAL_NMI_PENDING;
> > +       exec_control &= ~CPU_BASED_TPR_SHADOW;
> > +       exec_control |= vmcs12->cpu_based_vm_exec_control;
> > +
> > +       /*
> > +        * Write an illegal value to VIRTUAL_APIC_PAGE_ADDR. Later, if
> > +        * nested_get_vmcs12_pages can't fix it up, the illegal value
> > +        * will result in a VM entry failure.
> > +        */
> > +       if (exec_control & CPU_BASED_TPR_SHADOW) {
> > +               vmcs_write64(VIRTUAL_APIC_PAGE_ADDR, -1ull);
> > +               vmcs_write32(TPR_THRESHOLD, vmcs12->tpr_threshold);
> > +       } else {
> > +#ifdef CONFIG_X86_64
> > +               exec_control |= CPU_BASED_CR8_LOAD_EXITING |
> > +                               CPU_BASED_CR8_STORE_EXITING;
> > +#endif
> > +       }
> > +
> > +       /*
> > +        * A vmexit (to either L1 hypervisor or L0 userspace) is always needed
> > +        * for I/O port accesses.
> > +        */
> > +       exec_control &= ~CPU_BASED_USE_IO_BITMAPS;
> > +       exec_control |= CPU_BASED_UNCOND_IO_EXITING;
> > +       vmcs_write32(CPU_BASED_VM_EXEC_CONTROL, exec_control);
> > +
> > +       /*
> > +        * SECONDARY EXEC CONTROLS
> > +        */
> > +       if (cpu_has_secondary_exec_ctrls()) {
> > +               exec_control = vmx->secondary_exec_control;
> > +
> > +               /* Take the following fields only from vmcs12 */
> > +               exec_control &= ~(SECONDARY_EXEC_VIRTUALIZE_APIC_ACCESSES |
> > +                                 SECONDARY_EXEC_ENABLE_INVPCID |
> > +                                 SECONDARY_EXEC_RDTSCP |
> > +                                 SECONDARY_EXEC_XSAVES |
> > +                                 SECONDARY_EXEC_VIRTUAL_INTR_DELIVERY |
> > +                                 SECONDARY_EXEC_APIC_REGISTER_VIRT |
> > +                                 SECONDARY_EXEC_ENABLE_VMFUNC);
> > +               if (nested_cpu_has(vmcs12,
> > +                                  CPU_BASED_ACTIVATE_SECONDARY_CONTROLS)) {
> > +                       vmcs12_exec_ctrl = vmcs12->secondary_vm_exec_control &
> > +                               ~SECONDARY_EXEC_ENABLE_PML;
> > +                       exec_control |= vmcs12_exec_ctrl;
> > +               }
> > +
> > +               /* VMCS shadowing for L2 is emulated for now */
> > +               exec_control &= ~SECONDARY_EXEC_SHADOW_VMCS;
> > +
> > +               if (exec_control & SECONDARY_EXEC_VIRTUAL_INTR_DELIVERY)
> > +                       vmcs_write16(GUEST_INTR_STATUS,
> > +                               vmcs12->guest_intr_status);
> > +
> > +               /*
> > +                * Write an illegal value to APIC_ACCESS_ADDR. Later,
> > +                * nested_get_vmcs12_pages will either fix it up or
> > +                * remove the VM execution control.
> > +                */
> > +               if (exec_control & SECONDARY_EXEC_VIRTUALIZE_APIC_ACCESSES)
> > +                       vmcs_write64(APIC_ACCESS_ADDR, -1ull);
> > +
> > +               if (exec_control & SECONDARY_EXEC_ENCLS_EXITING)
> > +                       vmcs_write64(ENCLS_EXITING_BITMAP, -1ull);
> > +
> > +               vmcs_write32(SECONDARY_VM_EXEC_CONTROL, exec_control);
> > +       }
> > +
> > +       /*
> > +        * ENTRY CONTROLS
> > +        *
> > +        * vmcs12's VM_{ENTRY,EXIT}_LOAD_IA32_EFER and VM_ENTRY_IA32E_MODE
> > +        * are emulated by vmx_set_efer() in prepare_vmcs02(), but speculate
> > +        * on the related bits (if supported by the CPU) in the hope that
> > +        * we can avoid VMWrites during vmx_set_efer().
> > +        */
> > +       exec_control = (vmcs12->vm_entry_controls | vmcs_config.vmentry_ctrl) &
> > +                       ~VM_ENTRY_IA32E_MODE & ~VM_ENTRY_LOAD_IA32_EFER;
> > +       if (cpu_has_load_ia32_efer) {
> > +               if (guest_efer & EFER_LMA)
> > +                       exec_control |= VM_ENTRY_IA32E_MODE;
> > +               if (guest_efer != host_efer)
> > +                       exec_control |= VM_ENTRY_LOAD_IA32_EFER;
> > +       }
> > +       vm_entry_controls_init(vmx, exec_control);
> > +
> > +       /*
> > +        * EXIT CONTROLS
> > +        *
> > +        * L2->L1 exit controls are emulated - the hardware exit is to L0 so
> > +        * we should use its exit controls. Note that VM_EXIT_LOAD_IA32_EFER
> > +        * bits may be modified by vmx_set_efer() in prepare_vmcs02().
> > +        */
> > +       exec_control = vmcs_config.vmexit_ctrl;
> > 
> > +       if (cpu_has_load_ia32_efer && guest_efer != host_efer)
> > +               exec_control |= VM_EXIT_LOAD_IA32_EFER;
> This code looks new. I think it's fine, but it doesn't appear to be
> part of the refactoring.

Doh, this should have been introduced in patch 6/18, "KVM: nVMX: try
to set EFER bits correctly when init'ing entry controls".

> Reviewed-by: Jim Mattson <jmattson@xxxxxxxxxx>


