On Wed, Dec 11, 2024 at 10:37 PM Michael Roth <michael.roth@xxxxxxx> wrote:
>
> This patchset is also available at:
>
>   https://github.com/amdese/linux/commits/snp-prepare-thp-rfc1
>
> and is based on top of Paolo's kvm-coco-queue-2024-11 tag, which includes
> a snapshot of his patches[1] to provide tracking of whether or not
> sub-pages of a huge folio need to have kvm_arch_gmem_prepare() hooks issued
> before guest access:
>
>   d55475f23cea KVM: gmem: track preparedness a page at a time
>   64b46ca6cd6d KVM: gmem: limit hole-punching to ranges within the file
>   17df70a5ea65 KVM: gmem: add a complete set of functions to query page preparedness
>   e3449f6841ef KVM: gmem: allocate private data for the gmem inode
>
> [1] https://lore.kernel.org/lkml/20241108155056.332412-1-pbonzini@xxxxxxxxxx/
>
> This series addresses some of the pending review comments for those patches
> (feel free to squash/rework as needed), and implements a first real user in
> the form of a reworked version of Sean's original 2MB THP support for gmem.

Looking at the work targeted by Fuad to add in-place memory conversion
support via [1], and by Ackerley to address hugetlb page support in the
future, could the state tracking for preparedness be simplified as follows?

i) Prepare guest memfd ranges the first time an offset with
   mappability = GUEST is allocated, or the first time an already-allocated
   offset gains mappability = GUEST. Some scenarios that would lead to
   guest memfd range preparation:
   - Create file with default mappability = HOST, fallocate, convert
   - Create file with default mappability = GUEST, guest faults on
     private memory

ii) Unprepare guest memfd ranges the first time an offset with
    mappability = GUEST is deallocated, or the first time an allocated
    offset loses the mappability = GUEST attribute. Some scenarios that
    would lead to guest memfd range unpreparation:
    - Truncation
    - Conversion

iii) To handle scenarios with hugepages, page splitting/merging in guest
     memfd can also signal a change in page granularity.
[1] https://lore.kernel.org/kvm/20250117163001.2326672-1-tabba@xxxxxxxxxx/