Re: [PATCH 2/2] kvm: nVMX: Single-step traps trump expired VMX-preemption timer

On Fri, Apr 17, 2020 at 9:21 PM Sean Christopherson
<sean.j.christopherson@xxxxxxxxx> wrote:
>
> On Wed, Apr 15, 2020 at 04:33:31PM -0700, Jim Mattson wrote:
> > On Tue, Apr 14, 2020 at 5:12 PM Sean Christopherson
> > <sean.j.christopherson@xxxxxxxxx> wrote:
> > >
> > > On Tue, Apr 14, 2020 at 09:47:53AM -0700, Jim Mattson wrote:
> > > > Regarding -EBUSY, I'm in complete agreement. However, I'm not sure
> > > > what the potential confusion is regarding the event. Are you
> > > > suggesting that one might think that we have a #DB to deliver to L1
> > > > while we're in guest mode? IIRC, that can happen under SVM, but I
> > > > don't believe it can happen under VMX.
> > >
> > > The potential confusion is that vcpu->arch.exception.pending was already
> > > checked, twice.  It makes one wonder why it needs to be checked a third
> > > time.  And actually, I think that's probably a good indicator that singling
> > > out single-step #DB isn't the correct fix, it just happens to be the only
> > > case that's been encountered thus far, e.g. a #PF when fetching the instr
> > > for emulation should also get priority over the preemption timer.  On real
> > > hardware, expiration of the preemption timer while vectoring a #PF wouldn't
> > > get recognized until the next instruction boundary, i.e. at the
> > > start of the first instruction of the #PF handler.  Dropping the #PF isn't
> > > a problem in most cases, because unlike the single-step #DB, it will be
> > > re-encountered when L1 resumes L2.  But, dropping the #PF is still wrong.
> >
> > Yes, it's wrong in the abstract, but with respect to faults and the
> > VMX-preemption timer expiration, is there any way for either L1 or L2
> > to *know* that the virtual CPU has done something wrong?
>
> I don't think so?  But how is that relevant, i.e. if we can fix KVM instead
> of fudging the result, why wouldn't we fix KVM?

I'm not sure that I can fix KVM. The missing #DB traps were relatively
straightforward, but as for the rest of this mess...
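
For the record, the straightforward part amounts to servicing a pending
single-step #DB trap before synthesizing the preemption-timer exit. Roughly
(an illustrative sketch of the ordering only, not the literal diff;
vmx_pending_dbg_trap() and nested_vmx_preemption_timer_expired() are stand-in
names for whatever the tree actually calls them):

	/*
	 * Sketch: a trap-class #DB left pending from the previous
	 * instruction must be serviced before an expired VMX-preemption
	 * timer.  Helper names are placeholders.
	 */
	if (vcpu->arch.exception.pending && vmx_pending_dbg_trap(vcpu)) {
		if (vmx->nested.nested_run_pending)
			return -EBUSY;
		/* Deliver the #DB to L2, or reflect it to L1, first. */
		goto deliver_exception;
	}

	if (nested_vmx_preemption_timer_expired(vcpu)) {
		if (vmx->nested.nested_run_pending)
			return -EBUSY;
		nested_vmx_vmexit(vcpu, EXIT_REASON_PREEMPTION_TIMER, 0, 0);
		return 0;
	}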

Since you seem to have a handle on what needs to be done, I will defer to you.

> > Isn't it generally true that if you have an exception queued when you
> > transition from L2 to L1, then you've done something wrong? I wonder
> > if the call to kvm_clear_exception_queue() in prepare_vmcs12() just
> > serves to sweep a whole collection of problems under the rug.
>
> More than likely, yes.
>
> > > In general, interception of an event doesn't change the priority of events,
> > > e.g. INTR shouldn't get priority over NMI just because L1 wants to
> > > intercept INTR but not NMI.
> >
> > Yes, but that's a different problem altogether.
>
> But isn't the fix the same?  Stop processing events if a higher priority
> event is pending, regardless of whether the event exits to L1.

That depends on how you see the scope of the problem. One could argue
that the fix for everything that is wrong with KVM is actually the
same: properly emulate the physical CPU.
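
To make that concrete, the "do it properly" version is presumably a single
priority-ordered walk that services at most one event per instruction
boundary and stops at the first serviceable one, whether or not that event
happens to exit to L1. A toy model of the idea (all names invented for the
illustration, and the exact priority order elided):

	#include <errno.h>
	#include <stdbool.h>

	/* Toy model, not KVM code: event classes in rough priority order. */
	enum evt { EVT_DB_TRAP, EVT_NMI, EVT_INTR, EVT_PREEMPT_TIMER, EVT_MAX };

	/*
	 * Service the highest-priority pending event and stop.  Whether it
	 * is delivered to L2 or reflected to L1 doesn't affect the ordering;
	 * a lower-priority event (e.g. the expired preemption timer) simply
	 * waits for the next instruction boundary.
	 */
	static int check_events(bool pending[EVT_MAX], bool injection_blocked)
	{
		for (int e = 0; e < EVT_MAX; e++) {
			if (!pending[e])
				continue;
			if (injection_blocked)
				return -EBUSY;	/* retry at the next opportunity */
			pending[e] = false;	/* deliver to L2 or reflect to L1 */
			return 0;
		}
		return 0;
	}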


