Re: RFC: A KVM-specific alternative to UserfaultFD

On Thu, Nov 09, 2023, David Matlack wrote:
> On Thu, Nov 9, 2023 at 10:33 AM David Matlack <dmatlack@xxxxxxxxxx> wrote:
> > On Thu, Nov 9, 2023 at 9:58 AM Sean Christopherson <seanjc@xxxxxxxxxx> wrote:
> > > For both cases, KVM will need choke points on all accesses to guest memory.  Once
> > > the choke points exist and we have signed up to maintain them, the extra burden of
> > > gracefully handling "missing" memory versus frozen memory should be relatively
> > > small, e.g. it'll mainly be the notify-and-wait uAPI.
> >
> > To be honest, the choke points are a relatively small part of any
> > KVM-based demand paging scheme. We still need (a)-(e) from my original
> > email.
> 
> Another small thing here: I think we can find clean choke point(s)
> that fit both freezing and demand paging (aka "missing" pages), but
> there is a difference to keep in mind.  To freeze guest memory, KVM
> only needs to return an error at the choke point(s), whereas handling
> "missing" pages may require blocking, which adds constraints on where
> the choke point(s) can be placed.
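
(For illustration only, a minimal sketch of what such a choke point could
look like.  The helper names, the can_block flag, and the gmem_waitq field
below are hypothetical, not existing KVM code.)

#include <linux/kvm_host.h>
#include <linux/wait.h>

static int kvm_gmem_access_begin(struct kvm *kvm, gfn_t gfn, bool can_block)
{
	/* Freezing only needs an error return at the choke point. */
	if (kvm_gfn_is_frozen(kvm, gfn))		/* hypothetical helper */
		return -EFAULT;

	/* Demand paging ("missing" pages) may need to block. */
	if (!kvm_gfn_is_present(kvm, gfn)) {		/* hypothetical helper */
		if (!can_block)
			return -EAGAIN;			/* atomic context: caller must retry */

		/* Notify-and-wait: tell userspace, then sleep until resolved. */
		kvm_notify_gfn_missing(kvm, gfn);	/* hypothetical helper */
		return wait_event_interruptible(kvm->gmem_waitq,
						kvm_gfn_is_present(kvm, gfn));
	}

	return 0;
}

(The can_block flag is where the placement constraint shows up: a caller in
atomic context can only get an error back and has to redo the access from a
sleepable context.)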

Rats, I didn't think about not being able to block.  Luckily, that's *almost* a
non-issue, as user accesses already might_sleep().  At a glance, it's only x86's
shadow paging that uses kvm_vcpu_read_guest_atomic(); everything else either can
sleep or uses a gfn_to_pfn_cache or kvm_host_map cache.  Aha!  And all of x86's
usage can fail gracefully (for some definitions of gracefully), i.e. a failure will
either result in the access being retried after dropping mmu_lock or cause KVM to
zap a SPTE instead of doing something more optimal.
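
(Rough sketch of the fall-back pattern described above.  The caller and the
retry loop are invented for illustration; kvm_vcpu_read_guest_atomic() and
kvm_vcpu_read_guest() are the real accessors, and mmu_lock is a rwlock on
x86.)

#include <linux/kvm_host.h>

static int read_gpte_example(struct kvm_vcpu *vcpu, gpa_t gpa, u64 *gpte)
{
	int r;

retry:
	write_lock(&vcpu->kvm->mmu_lock);

	/* Atomic context: must not sleep while holding mmu_lock. */
	r = kvm_vcpu_read_guest_atomic(vcpu, gpa, gpte, sizeof(*gpte));
	if (r) {
		write_unlock(&vcpu->kvm->mmu_lock);

		/* Sleepable path: may block until the page is made present. */
		r = kvm_vcpu_read_guest(vcpu, gpa, gpte, sizeof(*gpte));
		if (r)
			return r;	/* hard failure: caller zaps the SPTE / exits */
		goto retry;
	}

	/* ... consume *gpte while still holding mmu_lock ... */
	write_unlock(&vcpu->kvm->mmu_lock);
	return 0;
}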
