Re: [RFC PATCH 00/39] 1G page support for guest_memfd


 



Amit Shah <amit@xxxxxxxxxxxxx> writes:

> Hey Ackerley,

Hi Amit,

> On Tue, 2024-09-10 at 23:43 +0000, Ackerley Tng wrote:
>> Hello,
>> 
>> This patchset is our exploration of how to support 1G pages in
>> guest_memfd, and
>> how the pages will be used in Confidential VMs.
>
> We've discussed this patchset at LPC and in the guest-memfd calls.  Can
> you please summarise the discussions here as a follow-up, so we can
> also continue discussing on-list, and not repeat things that are
> already discussed?

Thanks for this question! Since LPC, Vishal and I have been tied up with
some Google-internal work, which slowed down progress on 1G page support
for guest_memfd. We expect to make progress on it this quarter and over
the next few quarters.

The related updates are

1. No objections to using hugetlb as the source of 1G pages.

2. Prerequisite hugetlb changes.

+ I've separated some of the prerequisite hugetlb changes into another
  patch series hoping to have them merged ahead of and separately from
  this patchset [1].
+ Peter Xu contributed a better patchset, including a bugfix [2].
+ I have an alternative [3].
+ The next revision of this series (1G page support for guest_memfd)
  will be based on alternative [3]. I think there should be no issues
  there.
+ I believe Peter is also waiting on the next revision before we make
  further progress/decide on [2] or [3].

3. No objections to allowing mmap() of guest_memfd physical memory while
   the memory is marked shared, to avoid double allocation.

4. No objections to splitting pages when they are marked shared.

5. folio_put() callback for guest_memfd folio cleanup/merging.

+ In his series [4], Fuad uses the callback to reset the folio's
  mappability status.
+ The catch is that the callback is only invoked when folio->page_type
  == PGTY_guest_memfd, and folio->page_type aliases the folio's
  mapcount (they share a union), so any folio with a non-zero mapcount
  cannot have a valid page_type.
+ I was concerned that we might not get a callback, and hence
  unintentionally skip merging pages and fail to correctly restore
  hugetlb pages.
+ This was discussed at the last guest_memfd upstream call (2025-01-23
  07:58 PST), and the conclusion is that using folio->page_type works,
  because
    + We only merge folios in two cases: (1) when converting to private,
      and (2) when truncating folios (removing them from the filemap).
    + In (1), when converting to private, we can forcibly unmap all the
      converted pages or check that the mapcount is 0; once the mapcount
      is 0, we can install the callback by setting folio->page_type =
      PGTY_guest_memfd.
    + In (2), when truncating, we will be unmapping the folios anyway,
      so the mapcount is also 0 and we can install the callback.

Hope that covers the points that you're referring to. If there are other
parts that you'd like to know the status on, please let me know which
aspects those are!

> Also - as mentioned in those meetings, we at AMD are interested in this
> series along with SEV-SNP support - and I'm also interested in figuring
> out how we collaborate on the evolution of this series.

Thanks for all your help and comments during the guest_memfd upstream
calls, and thanks for the help from AMD.

Extending Fuad's mmap() support with 1G page support introduces more
states, which makes things more complicated (at least for me).

I'm modeling the states in Python so I can iterate more quickly. I also
have usage flows (e.g. allocate, guest_use, host_use,
transient_folio_get, close, transient_folio_put) as test cases.
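To give a feel for what such a model looks like, here is a minimal,
illustrative Python sketch (not the actual model; the state names,
events, and transition table are assumptions) of a folio state machine
driven by a usage flow as a test case:

```python
from enum import Enum, auto


class State(Enum):
    UNALLOCATED = auto()
    PRIVATE = auto()      # guest-only, e.g. backed by a 1G hugetlb folio
    SHARED = auto()       # split and mappable by the host


class FolioModel:
    # Illustrative transition table: (current state, event) -> next state.
    TRANSITIONS = {
        (State.UNALLOCATED, "allocate"): State.PRIVATE,
        (State.PRIVATE, "guest_use"): State.PRIVATE,
        (State.PRIVATE, "share"): State.SHARED,
        (State.SHARED, "host_use"): State.SHARED,
        (State.SHARED, "unshare"): State.PRIVATE,
        (State.PRIVATE, "close"): State.UNALLOCATED,
    }

    def __init__(self):
        self.state = State.UNALLOCATED

    def step(self, event):
        key = (self.state, event)
        if key not in self.TRANSITIONS:
            raise ValueError(f"illegal event {event!r} in {self.state}")
        self.state = self.TRANSITIONS[key]
        return self.state


# A usage flow, expressed as a test case:
f = FolioModel()
for ev in ["allocate", "guest_use", "share", "host_use", "unshare", "close"]:
    f.step(ev)
print(f.state)  # State.UNALLOCATED
```

The value of a model like this is that illegal flows (e.g. host_use
while private) raise immediately, so each flow doubles as a test case.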

I'm almost done with the model and my next steps are to write up a state
machine (like Fuad's [5]) and share that.

I'd be happy to share the Python model too, but I have to work through
some internal open-sourcing processes first, so if you think this would
be useful, let me know!

Then, I'll code it all up in a new revision of this series (target:
March 2025), which will be accompanied by source code on GitHub.

I'm happy to collaborate more closely; let me know if you have ideas
for collaboration!

> Thanks,
>
> 		Amit

[1] https://lore.kernel.org/all/cover.1728684491.git.ackerleytng@xxxxxxxxxx/T/
[2] https://lore.kernel.org/all/20250107204002.2683356-1-peterx@xxxxxxxxxx/T/
[3] https://lore.kernel.org/all/diqzjzayz5ho.fsf@xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx/
[4] https://lore.kernel.org/all/20250117163001.2326672-1-tabba@xxxxxxxxxx/T/
[5] https://lpc.events/event/18/contributions/1758/attachments/1457/3699/Guestmemfd%20folio%20state%20page_type.pdf

