Re: [PATCH RFC 0/5] mm/gup: Introduce exclusive GUP pinning

On 20.06.24 15:08, Mostafa Saleh wrote:
Hi David,

On Wed, Jun 19, 2024 at 09:37:58AM +0200, David Hildenbrand wrote:
Hi,

On 19.06.24 04:44, John Hubbard wrote:
On 6/18/24 5:05 PM, Elliot Berman wrote:
In arm64 pKVM and QuIC's Gunyah protected VM model, we want to support
grabbing shmem user pages instead of using KVM's guest_memfd. These
hypervisors provide a different isolation model than the CoCo
implementations from x86. KVM's guest_memfd is focused on providing
memory that is more isolated than AVF requires. Some specific examples
include the ability to pre-load data onto guest-private pages, to
dynamically share/isolate guest pages without copying, and (in the
future) to migrate guest-private pages. Given those differences, and
after a discussion in [1] and at PUCK, we want to try to stick with
existing shmem and extend GUP to support the isolation needs of arm64
pKVM and Gunyah.

The main question really is in which direction we want to, and can, develop
guest_memfd. At this point (after talking to Jason at LSF/MM), I wonder if
guest_memfd should be our new target for guest memory, both shared and
private. There are a bunch of issues to be sorted out though ...

As there is interest from Red Hat in supporting hugetlb-style huge pages
in confidential VMs for real-time workloads, and wasting memory is not
really desired, I'm going to think some more about some of the challenges
(shared+private in guest_memfd, mmap support, migration of !shared folios,
hugetlb-like support, in-place shared<->private conversion, interaction with
page pinning). Tricky.

Ideally, we'd have one way to back guest memory for confidential VMs in the
future.


Can you comment on the bigger design goal here? In particular:

1) Who would get the exclusive PIN and for which reason? When would we
    pin, when would we unpin?

2) What would happen if there is already another PIN? Can we deal with
    speculative short-term PINs from GUP-fast that could introduce
    errors?

3) How can we be sure we don't need other long-term pins (IOMMUs?) in
    the future?

Can you please clarify more about the IOMMU case?

pKVM has no merged upstream IOMMU support at the moment, although
there was an RFC a while ago [1], and a v2 should follow soon.

In those patches, KVM (running at EL2) manages the IOMMUs, including
the page tables, and all pages used for that are allocated from the
kernel.

These patches don't support IOMMUs for guests. However, I don't see
why that would be different from the CPU: once a page is pinned,
it can be owned by a guest, and that would be reflected in the
hypervisor's tracking, the CPU stage-2 page tables, and the IOMMU
page tables as well.

So this is my thinking; it might be flawed:

In the "normal" world (e.g., vfio), we FOLL_PIN|FOLL_LONGTERM the pages to be accessible by a dedicated device. We look them up in the page tables to pin them, then we can map them into the IOMMU.

Devices that cannot speak "private memory" should only access shared memory. So we must not have "private memory" mapped into their IOMMU.

Devices that can speak "private memory" may access either shared or private memory. So we may have "private memory" mapped into their IOMMU.


What I see (again, I might just be wrong):

1) How would the device be able to grab/access "private memory", if not
   via the user page tables?

2) How would we be able to convert shared -> private, if there is a
   longterm pin from that IOMMU? We must dynamically unmap it from the
   IOMMU.
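
To make 2) concrete: before such a page could be converted shared -> private, something would first have to tear down the device mapping and drop the longterm pin, conceptually along these lines (pure illustration with a made-up helper name, not an existing API; same headers as the sketch above):

/*
 * Hypothetical: revoke a device's access to one page so that a
 * shared -> private conversion no longer sees the longterm pin.
 */
static int revoke_device_access(struct iommu_domain *domain,
                                unsigned long iova, struct page *page)
{
        /* Tear down the DMA mapping so the device can no longer reach
         * the page ... */
        if (iommu_unmap(domain, iova, PAGE_SIZE) != PAGE_SIZE)
                return -EINVAL;

        /* ... and drop the longterm pin, so the conversion code no
         * longer sees an elevated pin count. */
        unpin_user_page(page);
        return 0;
}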

I assume that when you say "KVM (running at EL2) manages the IOMMUs, including the page tables", this is easily solved by not relying on pinning: KVM just knows what to update and where (which is a very different model from what VFIO does).

Thanks!

--
Cheers,

David / dhildenb




