Hi David,

On Fri, Oct 25, 2024 at 11:07:27PM -0700, David Rientjes wrote:
> On Wed, 16 Oct 2024, David Rientjes wrote:
> 
> > ----->o-----
> > My takeaway, based on the feedback that was provided in the
> > discussion:
> > 
> > - we need an allocator abstraction for persistent memory that can
> >   return memory with various characteristics: 1GB or not, kernel
> >   direct map or not, HVO or not, etc.
> > 
> > - built on top of that, we need the ability to carve out very large
> >   ranges of memory (cloud provider use case) with NUMA awareness on
> >   the kernel command line
> 
> Following up on this, I think this physical memory allocator could also
> be used as a backend for hugetlb. Hopefully this would be an allocator
> that is generally useful for multiple purposes, something like a
> mm/phys_alloc.c.

Can you elaborate on this? mm/page_alloc.c already allocates physical
memory :)
Or do you mean an allocator that will deal with memory carved out from
what the page allocator manages? (A purely illustrative sketch of such an
interface is below my signature.)

> Frank van der Linden may also have thoughts on the above?
> 
> > - we also need the ability to dynamically resize this, or to provide
> >   hints at allocation time that memory must be persisted across
> >   kexec, to support the non-cloud provider use case
> > 
> > - we need a filesystem abstraction that maps memory of the requested
> >   type, including guest_memfd, and then deals with all the fun of
> >   multitenancy since it would be drawing from a finite per-NUMA node
> >   pool of persistent memory
> > 
> > - absolutely critical to this discussion is defining what core
> >   infrastructure is required for a generally acceptable solution, and
> >   then what builds off of that to be more special-cased (like the
> >   cloud provider use case or the persistent tmpfs use case)
> > 
> > We're looking to continue that discussion here and then come together
> > again in a few weeks.
> 
> We'll be looking to schedule some more time to talk about this topic in
> the Wednesday, November 13 instance of the Linux MM Alignment Session.
> 
> After that, I think it would be quite useful to break out the set of
> people that are interested in persisting guest memory across kexec and
> KHO into a separate series to accelerate discussion and next steps.
> Getting the requirements and design locked down is critical, so I'm
> happy to facilitate that to any extent possible and welcome everybody
> interested in discussing it.
> 
> James, for the guestmemfs discussions, would this work for you?
> 
> Alexander, same question for you regarding the KHO work?
> 
> It's a global community, so the timing won't work for everybody. We'd
> plan on sending out summaries of these discussions, such as in this
> email, to solicit feedback and ideas from everybody.
> 
> If you're not on the To: or Cc: list already, please email me
> separately if you're interested in participating and then we can find a
> regular time.
> 
> This is exciting!

-- 
Sincerely yours,
Mike.
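
P.S. Purely for illustration, a minimal sketch of what a carve-out
physical allocator interface could look like. None of these names, flags
or signatures exist in the kernel today; they are invented only to make
the question above more concrete.

#include <stddef.h>

/* stand-in for the kernel's phys_addr_t, to keep the sketch self-contained */
typedef unsigned long long phys_addr_t;

/* per-allocation properties a caller could request (invented names) */
#define PHYS_ALLOC_1G          (1U << 0) /* back with 1GB pages */
#define PHYS_ALLOC_DIRECT_MAP  (1U << 1) /* keep in the kernel direct map */
#define PHYS_ALLOC_HVO         (1U << 2) /* apply HugeTLB vmemmap optimization */
#define PHYS_ALLOC_PERSISTENT  (1U << 3) /* contents must survive kexec */

/*
 * Allocate @size bytes from the carve-out pool on NUMA node @nid with the
 * requested properties.  Returns the physical address of the range, or 0
 * on failure.
 */
phys_addr_t phys_alloc(size_t size, int nid, unsigned int flags);

/* Return a previously allocated range to the carve-out pool. */
void phys_free(phys_addr_t addr, size_t size);

A hugetlb or guest_memfd backend would then sit on top of this and map the
returned ranges to userspace; that part is intentionally left out of the
sketch.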