Re: [PATCH v1 3/5] KVM: x86: nVMX: VMCS12 field's read/write respects field existence bitmap

On Tue, Oct 05, 2021, Jim Mattson wrote:
> On Tue, Oct 5, 2021 at 1:50 PM Sean Christopherson <seanjc@xxxxxxxxxx> wrote:
> >
> > On Tue, Oct 05, 2021, Jim Mattson wrote:
> > > On Tue, Oct 5, 2021 at 10:59 AM Sean Christopherson <seanjc@xxxxxxxxxx> wrote:
> > > >
> > > > On Tue, Oct 05, 2021, Jim Mattson wrote:
> > > > > On Tue, Oct 5, 2021 at 9:16 AM Sean Christopherson <seanjc@xxxxxxxxxx> wrote:
> > > > > >
> > > > > > On Tue, Sep 28, 2021, Robert Hoo wrote:
> > > > > > > On Fri, 2021-09-03 at 15:11 +0000, Sean Christopherson wrote:
> > > > > > >       You also said, "This is quite the complicated mess for
> > > > > > > something I'm guessing no one actually cares about.  At what point do
> > > > > > > we chalk this up as a virtualization hole and sweep it under the rug?"
> > > > > > > -- I couldn't agree more.
> > > > > >
> > > > > > ...
> > > > > >
> > > > > > > So, Sean, can you help converge our discussion and settle next step?
> > > > > >
> > > > > > Any objection to simply keeping KVM's current behavior, i.e. sweeping this under
> > > > > > the proverbial rug?
> > > > >
> > > > > Adding 8 KiB per vCPU seems like no big deal to me, but, on the other
> > > > > hand, Paolo recently argued that slightly less than 1 KiB per vCPU was
> > > > > unreasonable for VM-exit statistics, so maybe I've got a warped
> > > > > perspective. I'm all for pedantic adherence to the specification, but
> > > > > I have to admit that no actual hypervisor is likely to care (or ever
> > > > > will).
> > > >
> > > > It's not just the memory, it's also the complexity, e.g. to get VMCS shadowing
> > > > working correctly, both now and in the future.
> > >
> > > As far as CPU feature virtualization goes, this one doesn't seem that
> > > complex to me. It's not anywhere near as complex as virtualizing MTF,
> > > for instance, and KVM *claims* to do that! :-)
> >
> > There aren't many things as complex as MTF.  But unlike MTF, this behavior doesn't
> > have a concrete use case to justify the risk vs. reward.  IMO the odds of us breaking
> > something in KVM for "normal" use cases are higher than the odds of an L1 VMM breaking
> > because a VMREAD/VMWRITE didn't fail when it technically should have failed.
> 
> Playing devil's advocate here, because I totally agree with you...
> 
> Who's to say what's "normal"? It's a slippery slope when we start
> making personal value judgments about which parts of the architectural
> specification are important and which aren't.

I agree, but in a very similar case Intel chose to take an erratum instead of
fixing what was in all likelihood a microcode bug, i.e. one that could have been
patched in the field.  So it's not _just_ personal value judgment, though it's
definitely that too :-)

I'm not saying I'd actively oppose support for strict VMREAD/VMWRITE adherence
to the vCPU model, but I'm also not going to advise anyone to go spend their time
implementing a non-trivial fix for behavior that, AFAIK, doesn't adversely affect
any real-world use cases.
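
For concreteness, a minimal, self-contained sketch of the data structure being
debated above.  Jim's 8 KiB-per-vCPU figure falls out of keeping one existence
bit per possible 16-bit VMCS field encoding (2^16 bits = 8 KiB).  All names
below are illustrative, not taken from the patch; KVM's actual VMREAD/VMWRITE
emulation lives in arch/x86/kvm/vmx/nested.c:

#include <stdbool.h>
#include <stdint.h>

/* One existence bit per possible 16-bit VMCS field encoding. */
#define VMCS12_NR_ENCODINGS	(1u << 16)

struct vmcs12_field_bitmap {
	uint8_t bits[VMCS12_NR_ENCODINGS / 8];	/* 65536 bits = 8 KiB */
};

/* Mark a field as existing, e.g. when the vCPU model is configured. */
static void vmcs12_set_field_exists(struct vmcs12_field_bitmap *bm,
				    uint16_t encoding)
{
	bm->bits[encoding / 8] |= (uint8_t)(1u << (encoding % 8));
}

/* Consult the bitmap before emulating VMREAD/VMWRITE for L1. */
static bool vmcs12_field_exists(const struct vmcs12_field_bitmap *bm,
				uint16_t encoding)
{
	return bm->bits[encoding / 8] & (1u << (encoding % 8));
}

Emulation that consulted such a bitmap would signal VMfailValid with
VMXERR_UNSUPPORTED_VMCS_COMPONENT whenever the bit is clear, i.e. fail
VMREAD/VMWRITE for any field that doesn't exist under the guest's vCPU
model, rather than only for fields KVM itself doesn't know about.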


