On Tue, Feb 11, 2025 at 12:11:19PM +0000, Fuad Tabba wrote:
> diff --git a/virt/kvm/Kconfig b/virt/kvm/Kconfig
> index 54e959e7d68f..4e759e8020c5 100644
> --- a/virt/kvm/Kconfig
> +++ b/virt/kvm/Kconfig
> @@ -124,3 +124,7 @@ config HAVE_KVM_ARCH_GMEM_PREPARE
>  config HAVE_KVM_ARCH_GMEM_INVALIDATE
>  	bool
>  	depends on KVM_PRIVATE_MEM
> +
> +config KVM_GMEM_SHARED_MEM
> +	select KVM_PRIVATE_MEM
> +	bool

No strong opinion here, but it may not be straightforward for a reader
to see why a shared-mem option selects a private-mem one.  I wonder
whether it would be clearer to have a config for gmem alone, selected
no matter how gmem is consumed; then the two options above could both
select it (a rough sketch at the end of this mail).

I'm not sure how much guest-memfd code is hard-coded to PRIVATE_MEM;
that's actually what I hit myself, in both QEMU and KVM, when I wanted
to try guest-memfd in QEMU as purely shared (aka, no conversions, no
duplicated backends, but in-place).  So this is pretty much a pure
question to ask here.

The other thing is, currently guest-memfd only allows a 1:1 binding
between a specific gmem offset range and a KVM memslot, rather than
allowing the same range to be mapped into multiple memslots:

kvm_gmem_bind():
	if (!xa_empty(&gmem->bindings) &&
	    xa_find(&gmem->bindings, &start, end - 1, XA_PRESENT)) {
		filemap_invalidate_unlock(inode->i_mapping);
		goto err;
	}

I didn't dig further yet, but I feel like this won't trivially work
with things like SMRAM when in-place, where the same portion of a gmem
range can be mapped more than once (a second sketch at the end).  I
wonder if this is a hard limit for guest-memfd, and whether you hit
anything similar when working on this series.
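For the Kconfig question, roughly the below is what I meant -- only a
sketch, untested, and KVM_GMEM is a name I just made up here:

config KVM_GMEM
	# Assuming the XARRAY_MULTI select moves here from
	# KVM_PRIVATE_MEM, since I think it's the gmem bindings
	# xarray (xa_store_range) that needs it
	select XARRAY_MULTI
	bool

config KVM_PRIVATE_MEM
	select KVM_GMEM
	bool

config KVM_GMEM_SHARED_MEM
	select KVM_GMEM
	bool

Then virt/kvm/Makefile.kvm could build guest_memfd.o when KVM_GMEM is
set, rather than keying off KVM_PRIVATE_MEM.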
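For the binding limit, a hypothetical userspace sketch (untested; the
slot numbers, GPAs and sizes are made up) of the double-bind I suspect
an in-place SMRAM alias would need:

#include <sys/ioctl.h>
#include <linux/kvm.h>

/* vm_fd is a VM fd supporting KVM_CAP_GUEST_MEMFD; error checks omitted */
static void bind_same_gmem_range_twice(int vm_fd)
{
	struct kvm_create_guest_memfd create = {
		.size = 0x100000,			/* 1M gmem file */
	};
	int gmem_fd = ioctl(vm_fd, KVM_CREATE_GUEST_MEMFD, &create);

	struct kvm_userspace_memory_region2 region = {
		.slot = 0,				/* as_id 0, slot 0 */
		.flags = KVM_MEM_GUEST_MEMFD,
		.guest_phys_addr = 0xa0000,
		.memory_size = 0x20000,			/* 128K, a la SMRAM */
		.guest_memfd = gmem_fd,
		.guest_memfd_offset = 0xa0000,
	};
	/* First binding of gmem [640K, 768K): succeeds */
	ioctl(vm_fd, KVM_SET_USER_MEMORY_REGION2, &region);

	/* Same gmem range again, e.g. for the SMM address space */
	region.slot = (1 << 16) | 0;			/* as_id 1, slot 0 */
	/* Fails: kvm_gmem_bind() sees gmem [640K, 768K) already bound */
	ioctl(vm_fd, KVM_SET_USER_MEMORY_REGION2, &region);
}

Thanks,

-- 
Peter Xu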