Re: [PATCH] KVM: x86: Use gfn_to_pfn_cache for steal_time

On Fri, 2024-08-16 at 17:22 -0700, Sean Christopherson wrote:
> On Fri, Aug 02, 2024, David Woodhouse wrote:
> > On Fri, 2024-08-02 at 11:44 +0000, Carsten Stollmaier wrote:
> > > On vcpu_run, before entering the guest, the update of the steal time
> > > information causes a page-fault if the page is not present. In our
> > > scenario, this gets handled by do_user_addr_fault and successively
> > > handle_userfault since we have the region registered to that.
> > > 
> > > handle_userfault uses TASK_INTERRUPTIBLE, so it is interruptible by
> > > signals. do_user_addr_fault then busy-retries it if the pending signal
> > > is non-fatal. This leads to contention of the mmap_lock.
> > 
> > The busy-loop causes so much contention on mmap_lock that post-copy
> > live migration fails to make progress, and is leading to failures. Yes?
> > 
> > > This patch replaces the use of gfn_to_hva_cache with gfn_to_pfn_cache,
> > > as gfn_to_pfn_cache ensures page presence for the memory access,
> > > preventing the contention of the mmap_lock.
> > > 
> > > Signed-off-by: Carsten Stollmaier <stollmc@xxxxxxxxxx>
> > 
> > Reviewed-by: David Woodhouse <dwmw@xxxxxxxxxxxx>
> > 
> > I think this makes sense on its own, as it addresses the specific case
> > where KVM is *likely* to be touching a userfaulted (guest) page. And it
> > allows us to ditch yet another explicit asm exception handler.
> 
> At the cost of using a gpc, which has its own complexities.
> 
> But I don't understand why steal_time is special.  If the issue is essentially
> with handle_userfault(), can't this happen on any KVM uaccess?

Theoretically, yes. The steal time is only special in that it happens
so *often*, every time the vCPU is scheduled in.
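
(For context, the update is queued on every schedule-in and serviced on
the next guest entry, roughly like this; a paraphrased sketch of
arch/x86/kvm/x86.c, not a verbatim quote:)

	void kvm_arch_vcpu_load(struct kvm_vcpu *vcpu, int cpu)
	{
		...
		/* every schedule-in queues a steal-time update... */
		kvm_make_request(KVM_REQ_STEAL_UPDATE, vcpu);
		...
	}

	static int vcpu_enter_guest(struct kvm_vcpu *vcpu)
	{
		...
		/* ...which is serviced before each guest entry */
		if (kvm_check_request(KVM_REQ_STEAL_UPDATE, vcpu))
			record_steal_time(vcpu);
		...
	}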

We should *also* address the general case, perhaps by making the user
access functions interruptible, as discussed. But this solves the
immediate issue which is being observed, *and* lets us ditch the last
explicit asm exception handling in kvm/x86.c, which is why I think it's
worth doing anyway, even if there's an upcoming fix for the general
case.
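
For reference, the access pattern it moves to looks roughly like the
existing gfn_to_pfn_cache users in kvm_xen.c. This is only a sketch of
that pattern, not the literal patch; the field name vcpu->arch.st.cache
(as a gfn_to_pfn_cache) and the exact locking details are assumed here:

	struct gfn_to_pfn_cache *gpc = &vcpu->arch.st.cache; /* assumed name */
	struct kvm_steal_time *st;
	unsigned long flags;

	read_lock_irqsave(&gpc->lock, flags);
	while (!kvm_gpc_check(gpc, sizeof(*st))) {
		read_unlock_irqrestore(&gpc->lock, flags);

		/*
		 * Page presence is ensured here, in kvm_gpc_refresh(),
		 * rather than by faulting on the uaccess itself, which
		 * is what was contending on mmap_lock.
		 */
		if (kvm_gpc_refresh(gpc, sizeof(*st)))
			return;

		read_lock_irqsave(&gpc->lock, flags);
	}

	st = gpc->khva;
	st->steal += current->sched_info.run_delay -
		     vcpu->arch.st.last_steal;
	vcpu->arch.st.last_steal = current->sched_info.run_delay;

	read_unlock_irqrestore(&gpc->lock, flags);

The pre-patch code did the equivalent write through a gfn_to_hva_cache
with an explicit asm exception fixup, which is the handler this lets
us remove.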



