Re: [RFC PATCH 00/39] 1G page support for guest_memfd

Amit Shah <amit@xxxxxxxxxxxxx> writes:

>> <snip>
>> 
>> Thanks for all your help and comments during the guest_memfd upstream
>> calls,
>> and thanks for the help from AMD.
>> 
>> Extending Fuad's mmap() support with 1G page support introduces more
>> states, which made it more complicated (at least for me).
>> 
>> I'm modeling the states in Python so I can iterate more quickly. I
>> also have usage flows (e.g. allocate, guest_use, host_use,
>> transient_folio_get, close, transient_folio_put) as test cases.
>> 
>> I'm almost done with the model, and my next steps are to write up a
>> state machine (like Fuad's [5]) and share that.

Thanks everyone for all the comments at the 2025-02-06 guest_memfd
upstream call! Here are the materials from that call:

+ Slides: https://lpc.events/event/18/contributions/1764/attachments/1409/3704/guest-memfd-1g-page-support-2025-02-06.pdf
+ State diagram: https://lpc.events/event/18/contributions/1764/attachments/1409/3702/guest-memfd-state-diagram-split-merge-2025-02-06.drawio.svg
+ For those interested in editing the state diagram using draw.io:
  https://lpc.events/event/18/contributions/1764/attachments/1409/3703/guest-memfd-state-diagram-split-merge-2025-02-06.drawio.xml

>> 
>> I'd be happy to share the Python model too, but I have to work
>> through some internal open-sourcing processes first, so if you think
>> this will be useful, let me know!
>
> No problem.  Yes, I'm interested in this - it'll be helpful!

I've started working through the internal processes and will update here
when I'm done!
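
In the meantime, here's a minimal sketch of the modeling approach:
states plus a transition table, with usage flows doubling as test
cases. The state names and transitions below are illustrative
stand-ins, not the actual model (which also has to track splitting,
merging, and transient refcounts):

from enum import Enum, auto

class State(Enum):
    # Illustrative states only; the real model has more.
    UNALLOCATED = auto()
    GUEST = auto()
    HOST = auto()
    CLOSED = auto()

# Illustrative (state, event) -> next-state transition table.
TRANSITIONS = {
    (State.UNALLOCATED, "allocate"): State.GUEST,
    (State.GUEST, "host_use"): State.HOST,
    (State.HOST, "guest_use"): State.GUEST,
    (State.GUEST, "close"): State.CLOSED,
    (State.HOST, "close"): State.CLOSED,
}

def run_flow(events, state=State.UNALLOCATED):
    """Drive the model through a usage flow; invalid transitions raise."""
    for event in events:
        try:
            state = TRANSITIONS[(state, event)]
        except KeyError:
            raise ValueError(f"invalid event {event!r} in {state}") from None
    return state

# A usage flow written as a test case.
assert run_flow(["allocate", "host_use", "guest_use", "close"]) is State.CLOSED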

>
> The other thing of note is that while we have the kernel patches, a
> userspace to drive them and exercise them is currently missing.

In this and future patch series, I'll include selftests that exercise
the new functionality.

>
>> Then, I'll code it all up in a new revision of this series (target:
>> March 2025), which will be accompanied by source code on GitHub.
>> 
>> I'm happy to collaborate more closely; let me know if you have ideas
>> for collaboration!
>
> Thank you.  I think the bigger problem we currently have is allocation
> of hugepages -- which is also blocking a lot of the follow-on work.
> Vishal briefly mentioned isolating pages from Linux entirely last time
> -- that's also what I'm interested in: figuring out whether we can
> completely bypass the allocation problem by not allocating struct pages
> for non-host-use pages at all.  The guest_memfs/KHO/kexec/live-update
> patches also take this approach on AWS (for how their VMs are
> launched).  If we work together with those patches, allocation of 1G
> hugepages is simplified.  I'd like to discuss these themes further to
> see whether this approach helps as well.
>
>
> 		Amit

Vishal is still very interested in this and will probably be looking
into it while I push ahead assuming that KVM continues to use struct
pages. This was also brought up at the 2025-02-06 guest_memfd upstream
call; people were interested and think it will simplify refcounting for
merging and splitting.

I'll push ahead assuming that hugetlb is the source of 1G pages and
that KVM continues to use struct pages to describe guest private
memory.

The series will still be useful as an interim solution/prototype even if
other allocators are preferred and get merged. :)



