On Tue, Jul 16, 2024 at 10:34:55AM -0700, Sean Christopherson wrote:
> On Tue, Jul 16, 2024, Jason Gunthorpe wrote:
> > On Tue, Jul 16, 2024 at 09:03:00AM -0700, Sean Christopherson wrote:
> > >
> > > > + To support huge pages, guest_memfd will take ownership of the hugepages, and
> > > >   provide interested parties (userspace, KVM, iommu) with pages to be used.
> > > > + guest_memfd will track usage of (sub)pages, for both private and shared
> > > >   memory
> > > > + Pages will be broken into smaller (probably 4K) chunks at creation time to
> > > >   simplify implementation (as opposed to splitting at runtime when private to
> > > >   shared conversion is requested by the guest)
> > >
> > > FWIW, I doubt we'll ever release a version with mmap()+guest_memfd support that
> > > shatters pages at creation.  I can see it being an intermediate step, e.g. to
> > > prove correctness and provide a bisection point, but shattering hugepages at
> > > creation would effectively make hugepage support useless.
> >
> > Why?  If the private memory retains its contiguity separately but the
> > struct pages are removed from the vmemmap, what is the downside?
>
> Oooh, you're talking about shattering only the host userspace mappings.  Now I
> understand why there was a bit of a disconnect; I was thinking you (hand-wavy
> everyone) were saying that KVM would immediately shatter its own mappings too.

Right, I'm imagining that guest_memfd keeps track of the physical ranges in
something else, like a maple tree, an xarray, or heck, perhaps a SW radix page
table. It does not use struct pages.

Then it has, say, a bitmap indicating which 4k granules are shared. When KVM or
the private world needs the physical addresses, it reads them out of that
structure and always sees perfectly physically contiguous data regardless of
any shared/private state.
It is not so much "broken at creation time", but more that guest_memfd does not
use struct pages at all for private mappings, and thus we can set up the unused
struct pages however we like, including removing them from the vmemmap or
preconfiguring them for order-0 granules.

There is definitely some detailed data-structure work here to allow guest_memfd
to manage all of this efficiently and be effective for both the 4k and 1G cases.

Jason