Re: [PATCH v1 3/5] KVM: x86: nVMX: VMCS12 field's read/write respects field existence bitmap

On Fri, 2021-10-08 at 16:49 -0700, Jim Mattson wrote:
> We have some internal patches for virtualizing VMCS shadowing which
> may break if there is a guest VMCS field with index greater than
> VMX_VMCS_ENUM.MAX_INDEX. I plan to upstream them soon.

OK, thanks for letting us know. :-)
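
For anyone skimming the thread, here is a minimal sketch of what "a guest
VMCS field with index greater than VMX_VMCS_ENUM.MAX_INDEX" means. Only the
bit positions are architectural (per the SDM, bits 9:1 of a field encoding
are its index, and IA32_VMX_VMCS_ENUM reports the highest index in the same
bits); the mask/shift names match the kvm-unit-tests code quoted below, but
the two helper functions are mine, just for illustration:

#include <stdbool.h>
#include <stdint.h>

#define VMCS_FIELD_INDEX_SHIFT	1
#define VMCS_FIELD_INDEX_MASK	0x3feULL	/* bits 9:1 of the encoding */

static inline uint16_t vmcs_field_index(uint32_t encoding)
{
	/* Strip the access-type bit (bit 0) and the type/width bits. */
	return (encoding & VMCS_FIELD_INDEX_MASK) >> VMCS_FIELD_INDEX_SHIFT;
}

/* True for the case under discussion: a field whose index is above what
 * IA32_VMX_VMCS_ENUM advertises to the guest.
 */
static inline bool index_above_vmcs_enum(uint32_t encoding, uint64_t vmcs_enum)
{
	uint16_t max_index = (vmcs_enum & VMCS_FIELD_INDEX_MASK) >>
			     VMCS_FIELD_INDEX_SHIFT;

	return vmcs_field_index(encoding) > max_index;
}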
> 
> On Fri, Oct 8, 2021 at 8:09 AM Robert Hoo <robert.hu@xxxxxxxxxxxxxxx> wrote:
> > 
> > On Fri, 2021-10-08 at 16:23 +0800, Yu Zhang wrote:
> > > On Tue, Oct 05, 2021 at 11:22:15PM +0000, Sean Christopherson wrote:
> > > > On Tue, Oct 05, 2021, Jim Mattson wrote:
> > > > > On Tue, Oct 5, 2021 at 1:50 PM Sean Christopherson <seanjc@xxxxxxxxxx> wrote:
> > > > > > 
> > > > > > On Tue, Oct 05, 2021, Jim Mattson wrote:
> > > > > > > On Tue, Oct 5, 2021 at 10:59 AM Sean Christopherson <seanjc@xxxxxxxxxx> wrote:
> > > > > > > > 
> > > > > > > > On Tue, Oct 05, 2021, Jim Mattson wrote:
> > > > > > > > > On Tue, Oct 5, 2021 at 9:16 AM Sean Christopherson <seanjc@xxxxxxxxxx> wrote:
> > > > > > > > > > 
> > > > > > > > > > On Tue, Sep 28, 2021, Robert Hoo wrote:
> > > > > > > > > > > On Fri, 2021-09-03 at 15:11 +0000, Sean Christopherson wrote:
> > > > > > > > > > >       You also said, "This is quite the complicated
> > > > > > > > > > > mess for something I'm guessing no one actually cares
> > > > > > > > > > > about.  At what point do we chalk this up as a
> > > > > > > > > > > virtualization hole and sweep it under the rug?"
> > > > > > > > > > > -- I couldn't agree more.
> > > > > > > > > > 
> > > > > > > > > > ...
> > > > > > > > > > 
> > > > > > > > > > > So, Sean, can you help converge our discussion and
> > > > > > > > > > > settle on the next step?
> > > > > > > > > > 
> > > > > > > > > > Any objection to simply keeping KVM's current behavior,
> > > > > > > > > > i.e. sweeping this under the proverbial rug?
> > > > > > > > > 
> > > > > > > > > Adding 8 KiB per vCPU seems like no big deal to me, but,
> > > > > > > > > on the other hand, Paolo recently argued that slightly
> > > > > > > > > less than 1 KiB per vCPU was unreasonable for VM-exit
> > > > > > > > > statistics, so maybe I've got a warped perspective. I'm
> > > > > > > > > all for pedantic adherence to the specification, but I
> > > > > > > > > have to admit that no actual hypervisor is likely to care
> > > > > > > > > (or ever will).
> > > > > > > > 
> > > > > > > > It's not just the memory, it's also the complexity, e.g. to
> > > > > > > > get VMCS shadowing working correctly, both now and in the
> > > > > > > > future.
> > > > > > > 
> > > > > > > As far as CPU feature virtualization goes, this one doesn't
> > > > > > > seem that complex to me. It's not anywhere near as complex
> > > > > > > as virtualizing MTF, for instance, and KVM *claims* to do
> > > > > > > that! :-)
> > > > > > 
> > > > > > There aren't many things as complex as MTF.  But unlike MTF,
> > > > > > this behavior doesn't have a concrete use case to justify the
> > > > > > risk vs. reward.  IMO the odds of us breaking something in KVM
> > > > > > for "normal" use cases are higher than the odds of an L1 VMM
> > > > > > breaking because a VMREAD/VMWRITE didn't fail when it
> > > > > > technically should have failed.
> > > > > 
> > > > > Playing devil's advocate here, because I totally agree with
> > > > > you...
> > > > >
> > > > > Who's to say what's "normal"? It's a slippery slope when we start
> > > > > making personal value judgments about which parts of the
> > > > > architectural specification are important and which aren't.
> > > > 
> > > > I agree, but in a very similar case Intel chose to take an erratum
> > > > instead of fixing what was in all likelihood a microcode bug, i.e.
> > > > one that could have been patched in the field.  So it's not _just_
> > > > personal value judgment, though it's definitely that too :-)
> > > >
> > > > I'm not saying I'd actively oppose support for strict
> > > > VMREAD/VMWRITE adherence to the vCPU model, but I'm also not going
> > > > to advise anyone to go spend their time implementing a non-trivial
> > > > fix for behavior that, AFAIK, doesn't adversely affect any real
> > > > world use cases.
> > > > 
> > > 
> > > Thank you all for the discussion, Sean & Jim.
> > > 
> > > Could we draw a conclusion to just keep KVM as it is now? If yes,
> > > how about we deprecate the check against the max index value from
> > > MSR_IA32_VMX_VMCS_ENUM in vmx.c of kvm-unit-tests?
> > > 
> > > After all, we have not witnessed any real system doing so.
> > > 
> > > E.g.,
> > > 
> > > diff --git a/x86/vmx.c b/x86/vmx.c
> > > index f0b853a..63623e5 100644
> > > --- a/x86/vmx.c
> > > +++ b/x86/vmx.c
> > > @@ -380,8 +380,7 @@ static void test_vmwrite_vmread(void)
> > >         vmcs_enum_max = (rdmsr(MSR_IA32_VMX_VMCS_ENUM) & VMCS_FIELD_INDEX_MASK)
> > >                         >> VMCS_FIELD_INDEX_SHIFT;
> > >         max_index = find_vmcs_max_index();
> > > -       report(vmcs_enum_max == max_index,
> > > -              "VMX_VMCS_ENUM.MAX_INDEX expected: %x, actual: %x",
> > > +       printf("VMX_VMCS_ENUM.MAX_INDEX expected: %x, actual: %x\n",
> > >                max_index, vmcs_enum_max);
> > >
> > >         assert(!vmcs_clear(vmcs));
> > > 
> > > B.R.
> > > Yu
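
If we want to keep the data point without failing the run, an alternative
to deleting the check outright (my sketch below; it assumes kvm-unit-tests'
report_info() logging helper) would be to demote it:

-       report(vmcs_enum_max == max_index,
-              "VMX_VMCS_ENUM.MAX_INDEX expected: %x, actual: %x",
-              max_index, vmcs_enum_max);
+       if (vmcs_enum_max != max_index)
+               report_info("VMX_VMCS_ENUM.MAX_INDEX expected: %x, actual: %x",
+                           max_index, vmcs_enum_max);

That way a mismatch still shows up in the log without being counted as a
failure.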
> > 
> > I think this patch series still has value: it fixes the hard-coded
> > VMX_VMCS_ENUM that KVM is currently forced to report.
> >
> > My understanding of Sean's "simply keeping KVM's current behavior,
> > i.e. sweeping this under the proverbial rug" is that VMCS shadowing
> > will fail some VMCS field validation. Of course, this in turn will
> > (theoretically) fail some cases of this kvm-unit-test, though we
> > haven't hit that yet.
> > 
> > 
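
For completeness, the basic shape of the check this series' subject line
describes could look something like the sketch below. Every name in it is
invented for illustration (struct vmcs12_field_existence,
vmcs12_field_exists(), BITS_PER_ULONG); it is not the code in the actual
patches:

#include <stdbool.h>
#include <stdint.h>

#define VMCS12_FIELD_INDEX_MAX	0x1ff	/* bits 9:1 of the encoding */
#define BITS_PER_ULONG		(8 * sizeof(unsigned long))

struct vmcs12_field_existence {
	/* One bit per possible field index, built once from the vCPU
	 * model (VMX capability MSRs, secondary controls, etc.).
	 */
	unsigned long bitmap[(VMCS12_FIELD_INDEX_MAX + 1) / BITS_PER_ULONG];
};

static bool vmcs12_field_exists(const struct vmcs12_field_existence *f,
				uint32_t encoding)
{
	uint16_t index = (encoding >> 1) & VMCS12_FIELD_INDEX_MAX;

	return f->bitmap[index / BITS_PER_ULONG] &
	       (1UL << (index % BITS_PER_ULONG));
}

The VMREAD/VMWRITE emulation would then signal VMfailValid ("unsupported
VMCS component") when the lookup returns false, instead of succeeding for
a field the vCPU model says should not exist; that is the virtualization
hole discussed above.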