Re: [PATCH RFC 0/5] mm/gup: Introduce exclusive GUP pinning

On 21.06.24 01:54, Sean Christopherson wrote:
On Thu, Jun 20, 2024, Jason Gunthorpe wrote:
On Thu, Jun 20, 2024 at 01:30:29PM -0700, Sean Christopherson wrote:
I.e. except for blatant bugs, e.g. use-after-free, we need to be able to guarantee
with 100% accuracy that there are no outstanding mappings when converting a page
from shared=>private.  Crossing our fingers and hoping that short-term GUP will
have gone away isn't enough.

To be clear, it is not crossing fingers. If the page refcount is 0 then
there are no references to that memory anywhere at all. It is 100%
certain.

It may take time to reach zero, but when it does it is safe.

Yeah, we're on the same page, I just didn't catch the implicit (or maybe it was
explicitly stated earlier) "wait for the refcount to hit zero" part that David
already clarified.
Many things rely on this property, including FSDAX.
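
Just to illustrate the "wait until the refcount drops" part with a very rough, hypothetical sketch (not code from this series): base_refs stands in for however many references the filemap/owner legitimately holds on the folio.

#include <linux/pagemap.h>
#include <linux/sched.h>
#include <linux/sched/signal.h>

/*
 * Illustrative only: hold off a shared=>private conversion until all
 * transient references (short-term GUP, O_DIRECT, speculative lookups,
 * ...) on the folio have been dropped.
 */
static int gmem_wait_for_extra_refs(struct folio *folio, int base_refs)
{
	while (folio_ref_count(folio) > base_refs) {
		if (fatal_signal_pending(current))
			return -EINTR;
		cond_resched();
	}
	return 0;
}

A real implementation would presumably sleep on a waitqueue instead of polling, but the invariant is the same: the conversion only proceeds once nothing beyond the expected references remains.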

For non-CoCo VMs, I expect we'll want to be much more permissive, but I think
they'll be a complete non-issue because there is no shared vs. private to worry
about.  We can simply allow any and all userspace mappings for guest_memfd that is
attached to a "regular" VM, because a misbehaving userspace only loses whatever
hardening (or other benefits) was being provided by using guest_memfd.  I.e. the
kernel and system at-large isn't at risk.

It does seem to me like guest_memfd should really focus on the private
aspect.

We'll likely have to enter that domain for clean huge page support and/or pKVM here either way.

Likely the future will see a mixture of things: some will use guest_memfd only for the "private" parts and anon/shmem for the "shared" parts; others will use guest_memfd for both.


If we need normal memfd enhancements of some kind to work better with
KVM then that may be a better option than turning guest_memfd into
memfd.

Heh, and then we'd end up turning memfd into guest_memfd.  As I see it, being
able to safely map TDX/SNP/pKVM private memory is a happy side effect that is
possible because guest_memfd isn't subordinate to the primary MMU, but private
memory isn't the core identity of guest_memfd.

Right.


The thing that makes guest_memfd tick is that it's guest-first, i.e. allows mapping
memory into the guest with more permissions/capabilities than the host.  E.g. access
to private memory, hugepage mappings when the host is forced to use small pages,
RWX mappings when the host is limited to RO, etc.

We could do a subset of those for memfd, but I don't see the point, assuming we
allow mmap() on shared guest_memfd memory.  Solving mmap() for VMs that do
private<=>shared conversions is the hard problem to solve.  Once that's done,
we'll get support for regular VMs along with the other benefits of guest_memfd
for free (or very close to free).

I suspect there would be pushback from Hugh if we tried to teach memfd things it really shouldn't be doing.

I once shared the idea of having a guest_memfd+memfd pair (managed by KVM or whatever more generic virt infrastructure), whereby we could move folios back and forth, and only the memfd pages could be mapped and consequently pinned. Of course, we could only move full folios, which implies some kind of option b) for handling larger memory chunks (gigantic pages).
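
For the record, a very rough sketch of what such a move could look like, ignoring the details (LRU handling, swapbacked state, accounting, error paths) that make it non-trivial in practice; all names are illustrative and the caller is assumed to hold its own reference on an unmapped, unpinned folio:

#include <linux/pagemap.h>

static int gmem_pair_move_folio(struct address_space *dst,
				struct folio *folio, pgoff_t index)
{
	int ret;

	folio_lock(folio);
	/* Drop the folio from its current file ... */
	filemap_remove_folio(folio);
	/* ... and insert it into the other half of the pair. */
	ret = filemap_add_folio(dst, folio, index, GFP_KERNEL);
	folio_unlock(folio);
	return ret;
}

Conversion to shared would move the folio into the mappable memfd side; conversion back to private would move it into guest_memfd, relying on the refcount rules above to guarantee that nothing still references it at that point.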

But I'm not sure that is really required, or whether it wouldn't just be easier to let the guest_memfd be mapped but only hand out shared pages.
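
That would roughly amount to a ->fault handler that refuses to hand out private folios. A sketch, where kvm_gmem_is_shared() and kvm_gmem_get_folio() are stand-ins for whatever actually tracks the shared/private state and looks up folios:

#include <linux/fs.h>
#include <linux/mm.h>
#include <linux/pagemap.h>

static vm_fault_t kvm_gmem_fault(struct vm_fault *vmf)
{
	struct inode *inode = file_inode(vmf->vma->vm_file);
	struct folio *folio;

	/* Assume the lookup returns a locked, refcounted folio. */
	folio = kvm_gmem_get_folio(inode, vmf->pgoff);
	if (IS_ERR(folio))
		return VM_FAULT_SIGBUS;

	/* Never map memory that is currently private to the guest. */
	if (!kvm_gmem_is_shared(inode, vmf->pgoff)) {
		folio_unlock(folio);
		folio_put(folio);
		return VM_FAULT_SIGBUS;
	}

	vmf->page = folio_file_page(folio, vmf->pgoff);
	return VM_FAULT_LOCKED;
}

A shared=>private conversion would then unmap the range and wait for any remaining references to drop, as discussed above.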

--
Cheers,

David / dhildenb




