Re: [PATCH v13 10/24] gunyah: vm_mgr: Add/remove user memory regions

Hi Will,

On 6/22/2023 4:56 PM, Elliot Berman wrote:


On 6/7/2023 8:54 AM, Elliot Berman wrote:


On 6/5/2023 7:18 AM, Will Deacon wrote:
Hi Elliot,

[+Quentin since he's looked at the MMU notifiers]

Sorry for the slow response, I got buried in email during a week away.

On Fri, May 19, 2023 at 10:02:29AM -0700, Elliot Berman wrote:
On 5/19/2023 4:59 AM, Will Deacon wrote:
On Tue, May 09, 2023 at 01:47:47PM -0700, Elliot Berman wrote:
+    ret = account_locked_vm(ghvm->mm, mapping->npages, true);
+    if (ret)
+        goto free_mapping;
+
+    mapping->pages = kcalloc(mapping->npages, sizeof(*mapping->pages), GFP_KERNEL_ACCOUNT);
+    if (!mapping->pages) {
+        ret = -ENOMEM;
+        mapping->npages = 0; /* update npages for reclaim */
+        goto unlock_pages;
+    }
+
+    gup_flags = FOLL_LONGTERM;
+    if (region->flags & GH_MEM_ALLOW_WRITE)
+        gup_flags |= FOLL_WRITE;
+
+    pinned = pin_user_pages_fast(region->userspace_addr, mapping->npages,
+                    gup_flags, mapping->pages);
+    if (pinned < 0) {
+        ret = pinned;
+        goto free_pages;
+    } else if (pinned != mapping->npages) {
+        ret = -EFAULT;
+        mapping->npages = pinned; /* update npages for reclaim */
+        goto unpin_pages;
+    }

Sorry if I missed it, but I still don't see where you reject file mappings
here.


Sure, I can reject file mappings. I didn't catch that this was the ask
earlier; I thought it was only a comment on the behavior of file mappings.

I thought the mention of filesystem corruption was clear enough! It's
definitely something we shouldn't allow.
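
[For anyone following along, a minimal sketch of one way to do that
rejection before the pin; the helper name and its placement are
hypothetical, not part of the patch:]

#include <linux/mm.h>

/* Hypothetical helper: fail if any VMA covering [start, end) is file-backed. */
static int gh_vm_reject_file_backed(struct mm_struct *mm,
				    unsigned long start, unsigned long end)
{
	struct vm_area_struct *vma;
	VMA_ITERATOR(vmi, mm, start);
	int ret = 0;

	mmap_read_lock(mm);
	for_each_vma_range(vmi, vma, end) {
		/*
		 * vm_file is set for file-backed mappings, including
		 * shmem/memfd; whether those should also be refused is
		 * a policy decision for the series.
		 */
		if (vma->vm_file) {
			ret = -EINVAL;
			break;
		}
	}
	mmap_read_unlock(mm);

	return ret;
}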

This is also the wrong interface for upstream. Please get involved with the fd-based guest memory discussions [1] and port your series to that.


The user interface design for *shared* memory aligns with
KVM_SET_USER_MEMORY_REGION.
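
[For reference, the KVM interface being compared against boils down to
handing the kernel a slot/flags/GPA/size/HVA tuple via ioctl; the values
below are illustrative only:]

#include <linux/kvm.h>
#include <sys/ioctl.h>

static int set_memslot(int vm_fd, void *hva)
{
	struct kvm_userspace_memory_region region = {
		.slot            = 0,
		.flags           = 0,		/* or e.g. KVM_MEM_READONLY */
		.guest_phys_addr = 0x80000000,
		.memory_size     = 0x200000,	/* 2 MiB */
		.userspace_addr  = (__u64)(unsigned long)hva,
	};

	/* KVM populates stage-2 on demand; no longterm GUP pin is taken. */
	return ioctl(vm_fd, KVM_SET_USER_MEMORY_REGION, &region);
}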

I don't think it does. For example, file mappings don't work (as above),
you're placing additional rlimit requirements on the caller, read-only
memslots are not functional, the memory cannot be swapped or migrated,
dirty logging doesn't work etc. pKVM is in the same boat, but that's why
we're not upstreaming this part in its current form.


I thought pKVM was only holding off on upstreaming changes related to guest-private memory?

I understood we want to use restricted memfd for giving guest-private
memory (Gunyah calls this "lending memory"). When I went through the
changes, I gathered KVM is using restricted memfd only for guest-private
memory and not for shared memory. Thus, I dropped support for lending
memory to the guest VM and only retained the shared memory support in
this series. I'd like to merge what we can today and introduce the
guest-private memory support in tandem with the restricted memfd; I
don't see much reason to delay the series.

Right, protected guests will use the new restricted memfd ("guest mem"
now, I think?), but non-protected guests should implement the existing
interface *without* the need for the GUP pin on guest memory pages. Yes,
that means full support for MMU notifiers so that these pages can be
managed properly by the host kernel. We're working on that for pKVM, but
it requires a more flexible form of memory sharing over what we currently
have so that e.g. the zero page can be shared between multiple entities.
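
[To make that requirement concrete, a rough sketch of what MMU-notifier
support would look like on the Gunyah side, assuming a struct gh_vm that
embeds a struct mmu_notifier and a gh_vm_unmap_range() helper; both are
hypothetical, not pKVM or Gunyah code:]

#include <linux/mmu_notifier.h>

static int gh_invalidate_range_start(struct mmu_notifier *mn,
				     const struct mmu_notifier_range *range)
{
	/* Assumes struct gh_vm embeds a struct mmu_notifier member. */
	struct gh_vm *ghvm = container_of(mn, struct gh_vm, mmu_notifier);

	/* Hypothetical: drop stage-2 translations so the host can reuse the pages. */
	gh_vm_unmap_range(ghvm, range->start, range->end);

	return 0;
}

static const struct mmu_notifier_ops gh_mmu_notifier_ops = {
	.invalidate_range_start	= gh_invalidate_range_start,
};

/*
 * Registered once per VM against the owning mm, e.g.:
 *	ghvm->mmu_notifier.ops = &gh_mmu_notifier_ops;
 *	mmu_notifier_register(&ghvm->mmu_notifier, ghvm->mm);
 */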

Gunyah doesn't support swapping pages out while the guest is running,
and the design of Gunyah isn't made to give the host kernel full control
over the S2 page table for its guests. As best I can tell from reading
the respective drivers, ACRN and Nitro Enclaves both GUP-pin guest
memory pages prior to giving them to the guest, so I don't think this
requirement from Gunyah is particularly unusual.


I read/dug into MMU notifiers more, and I don't think they match
Gunyah's features today. We don't allow the host to freely manage a VM's
pages because that would require the guest VM to have a level of trust
in the host. Once a page is given to the guest, it stays with the guest
for the lifetime of the VM. Allowing the host to replace pages in the
guest memory map isn't part of the security model of any VM we run on
Gunyah. With that requirement, longterm pinning looks like the correct
approach to me.
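
[Concretely, the reclaim side of that longterm pin would just mirror the
unwind labels in the hunk quoted above; the helper and struct names here
are guesses at the series' naming, not verbatim:]

#include <linux/mm.h>
#include <linux/slab.h>

static void gh_vm_mem_reclaim(struct gh_vm *ghvm, struct gh_vm_mem *mapping)
{
	/* The guest may have written the pages; mark them dirty on unpin. */
	unpin_user_pages_dirty_lock(mapping->pages, mapping->npages, true);
	account_locked_vm(ghvm->mm, mapping->npages, false);
	kfree(mapping->pages);
}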

Is my approach of longterm pinning correct, given that Gunyah doesn't allow the host to freely swap pages?


