RE: [RFC PATCH 00/18] KVM: Post-copy live migration for guest_memfd

On Thursday, July 18, 2024 9:09 AM, James Houghton wrote:
> On Wed, Jul 17, 2024 at 8:03 AM Wang, Wei W <wei.w.wang@xxxxxxxxx>
> wrote:
> >
> > On Wednesday, July 17, 2024 1:10 AM, James Houghton wrote:
> > > You're right that, today, supporting guest-private memory *only* does
> > > simplify things (no async userfaults). I think your
> > > strategy for implementing post-copy would work (so, shared->private
> > > conversion faults for vCPU accesses to private memory, and userfaultfd for
> > > everything else).
> >
> > Yes, it works and has been used for our internal tests.
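To make that concrete, here is a simplified sketch of the destination-side
handling (illustrative only, not our exact code; fetch_page_from_source()
is a placeholder for however the page contents actually arrive):

#include <linux/kvm.h>
#include <sys/ioctl.h>

/* Hypothetical helper: pull the page from the migration source and
 * install its contents into the guest_memfd range. */
extern void fetch_page_from_source(int gmem_fd, __u64 gpa, __u64 size);

/*
 * Called from the vCPU loop on KVM_EXIT_MEMORY_FAULT when
 * run->memory_fault.flags has KVM_MEMORY_EXIT_FLAG_PRIVATE set,
 * i.e. the vCPU touched memory it expects to be private while the
 * attributes still say shared (the conversion fault above).
 */
static void handle_conversion_fault(int vm_fd, int gmem_fd,
                                    struct kvm_run *run)
{
        struct kvm_memory_attributes attr = {
                .address    = run->memory_fault.gpa,
                .size       = run->memory_fault.size,
                .attributes = KVM_MEMORY_ATTRIBUTE_PRIVATE,
        };

        fetch_page_from_source(gmem_fd, attr.address, attr.size);

        /* Flip the range to private so the next KVM_RUN maps it from
         * guest_memfd instead of exiting again. */
        ioctl(vm_fd, KVM_SET_MEMORY_ATTRIBUTES, &attr);
}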
> >
> > >
> > > I'm not 100% sure what should happen in the case of a non-vCPU
> > > access to should-be-private memory; today it seems like KVM just
> > > provides the shared version of the page, so conventional use of
> > > userfaultfd shouldn't break anything.
> >
> > This seems to be the trusted I/O usage (I'm not aware of other usages;
> > emulated device backends, such as vhost, work with shared pages).
> > Migration support for trusted device passthrough doesn't seem to be
> > architecturally ready yet. Especially for postcopy: AFAIK, even the
> > legacy VM case lacks support for device passthrough (not sure if you've
> > made it internally). So it seems too early to discuss this in detail.
> 
> We don't migrate VMs with passthrough devices.
> 
> I still think the way KVM handles non-vCPU accesses to private memory is
> wrong: surely it is an error, yet we simply provide the shared version of the
> page. *shrug*
> 
> >
> > >
> > > But eventually guest_memfd itself will support "shared" memory,
> >
> > OK, I thought of this. I'm not sure how feasible it would be to extend
> > gmem for shared memory. I think questions like the ones below need to be
> > investigated:
> 
> An RFC for it got posted recently[1]. :)
> 
> > #1 what are the tangible benefits of gmem-based shared memory, compared
> >      to the legacy shared memory that we have now?
> 
> For [1], unmapping guest memory from the direct map.
> 
> > #2 There would be some gaps to make gmem usable for shared pages. For
> >       example, would it support userspace mappings (without security concerns)?
> 
> At least in [1], userspace would be able to mmap it, but KVM would still not be
> able to GUP it (instead going through the normal guest_memfd path).
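(If I'm reading [1] right, the usage would look roughly like the sketch
below; note that mmap() on a guest_memfd fd is what that RFC proposes,
not something upstream supports today:)

#include <linux/kvm.h>
#include <sys/ioctl.h>
#include <sys/mman.h>

/* Create a guest_memfd and map it for the VMM's own accesses. */
static void *map_gmem(int vm_fd, __u64 size)
{
        struct kvm_create_guest_memfd gmem = { .size = size };
        int fd = ioctl(vm_fd, KVM_CREATE_GUEST_MEMFD, &gmem);

        /* The mapping is for userspace only: KVM would still reach the
         * memory via the fd + offset (no GUP of these PTEs), which is
         * why the host page table layout stops mattering to KVM. */
        return mmap(NULL, size, PROT_READ | PROT_WRITE, MAP_SHARED, fd, 0);
}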
> 
> > #3 if gmem gets extended to be something like hugetlb (e.g. 1GB), would
> >      it result in the same issue as hugetlb?
> 
> Good question. At the end of the day, the problem is that GUP relies on host
> mm page table mappings, and HugeTLB can't map things with PAGE_SIZE PTEs.
> 
> At least as of [1], given that KVM doesn't GUP guest_memfd memory, we don't
> rely on the host mm page table layout, so we don't have the same problem.
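Right -- inside KVM that's roughly the difference between the two lookup
paths (signatures abridged from ~6.8-era code and quoted from memory, so
take the details with a grain of salt):

/* GUP path: resolves through the host mm page tables, so the host
 * PTE layout (and hence HugeTLB's mapping granularity) matters. */
kvm_pfn_t hva_to_pfn(unsigned long addr, bool atomic, bool interruptible,
                     bool *async, bool write_fault, bool *writable);

/* guest_memfd path: fd + offset lookup into the gmem inode's page
 * cache; host PTEs are never consulted. */
int kvm_gmem_get_pfn(struct kvm *kvm, struct kvm_memory_slot *slot,
                     gfn_t gfn, kvm_pfn_t *pfn, int *max_order);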
> 
> For VMMs that want to catch userspace (or non-GUP kernel) accesses via a
> guest_memfd VMA, it's possible the same issue arises. But for VMMs that
> don't care to catch these kinds of accesses (the kind of user that would use
> KVM Userfault to implement post-copy), it doesn't matter.
> 
> [1]: https://lore.kernel.org/kvm/20240709132041.3625501-1-roypat@xxxxxxxxxxxx/

Ah, I overlooked this series, thanks for the reminder.
Let me check the details first. 



