On Wed, Jun 15, 2022 at 4:53 PM Jann Horn <jannh@xxxxxxxxxx> wrote:
>
> On Tue, Jun 14, 2022 at 4:11 AM Sasha Levin <sashal@xxxxxxxxxx> wrote:
> >
> > From: Paolo Bonzini <pbonzini@xxxxxxxxxx>
> >
> > [ Upstream commit 6cd88243c7e03845a450795e134b488fc2afb736 ]
> >
> > If a vCPU is outside guest mode and is scheduled out, it might be in the
> > process of making a memory access. A problem occurs if another vCPU uses
> > the PV TLB flush feature during the period when the vCPU is scheduled
> > out, and a virtual address has already been translated but has not yet
> > been accessed, because this is equivalent to using a stale TLB entry.
> >
> > To avoid this, only report a vCPU as preempted if sure that the guest
> > is at an instruction boundary. A rescheduling request will be delivered
> > to the host physical CPU as an external interrupt, so for simplicity
> > consider any vmexit *not* an instruction boundary except for external
> > interrupts.
> >
> > It would in principle be okay to report the vCPU as preempted also
> > if it is sleeping in kvm_vcpu_block(): a TLB flush IPI will incur the
> > vmentry/vmexit overhead unnecessarily, and optimistic spinning is
> > also unlikely to succeed. However, leave it for later because right
> > now kvm_vcpu_check_block() is doing memory accesses. Even
> > though the TLB flush issue only applies to virtual memory addresses,
> > it's very much preferable to be conservative.
> >
> > Reported-by: Jann Horn <jannh@xxxxxxxxxx>
> > Signed-off-by: Paolo Bonzini <pbonzini@xxxxxxxxxx>
> > Signed-off-by: Sasha Levin <sashal@xxxxxxxxxx>
>
> This feature was introduced in commit f38a7b75267f1f (first in 4.16).
> I think the fix has to be applied all the way back to there (so,
> additionally to what you already did, it'd have to be added to 4.19,
> 5.4 and 5.10)?
>
> But it doesn't seem to apply cleanly to those older branches. Paolo,
> are you going to send stable backports of this?
Also, I think the same thing applies to "KVM: x86: do not set st->preempted when going back to user space"?