On Wed, Nov 1, 2023 at 9:55 PM Sean Christopherson <seanjc@xxxxxxxxxx> wrote:
>
> On Wed, Nov 01, 2023, Fuad Tabba wrote:
> > > > > @@ -1034,6 +1034,9 @@ static void kvm_destroy_dirty_bitmap(struct kvm_memory_slot *memslot)
> > > > >  /* This does not remove the slot from struct kvm_memslots data structures */
> > > > >  static void kvm_free_memslot(struct kvm *kvm, struct kvm_memory_slot *slot)
> > > > >  {
> > > > > +	if (slot->flags & KVM_MEM_PRIVATE)
> > > > > +		kvm_gmem_unbind(slot);
> > > > > +
> > > >
> > > > Should this be called after kvm_arch_free_memslot()? Arch-specific code
> > > > might need some of the data before the unbinding, something I thought
> > > > might be necessary at one point for the pKVM port when deleting a
> > > > memslot, but realized later that kvm_invalidate_memslot() ->
> > > > kvm_arch_guest_memory_reclaimed() was the more logical place for it.
> > > > Also, since that seems to be the pattern for arch-specific handlers in
> > > > KVM.
> > >
> > > Maybe?  But only if we care about symmetry between the allocation and free
> > > paths.  I really don't think kvm_arch_free_memslot() should be doing
> > > anything beyond a "pure" free.  E.g. kvm_arch_free_memslot() is also called
> > > after moving a memslot, which hopefully we never actually have to allow for
> > > guest_memfd, but any code in kvm_arch_free_memslot() would bring about
> > > "what if" questions regarding memslot movement.  I.e. the API is intended
> > > to be a "free arch metadata associated with the memslot".
> > >
> > > Out of curiosity, what does pKVM need to do at kvm_arch_guest_memory_reclaimed()?
> >
> > It's about the host reclaiming ownership of guest memory when tearing
> > down a protected guest. In pKVM, we currently tear down the guest and
> > reclaim its memory when kvm_arch_destroy_vm() is called. The problem
> > with guestmem is that kvm_gmem_unbind() could get called before that
> > happens, after which the host might try to access the unbound guest
> > memory.
> > Since the host hasn't reclaimed ownership of the guest memory
> > from hyp, hilarity ensues (it crashes).
> >
> > Initially, I hooked guest memory reclaim to kvm_free_memslot(), but
> > then I needed to move the unbind later in the function. I realized
> > later that kvm_arch_guest_memory_reclaimed() gets called earlier (at
> > the right time), and is more aptly named.
>
> Aha!  I suspected that might be the case.
>
> TDX and SNP also need to solve the same problem of "reclaiming" memory before it
> can be safely accessed by the host.  The plan is to add an arch hook (or two?)
> into guest_memfd that is invoked when memory is freed from guest_memfd.
>
> Hooking kvm_arch_guest_memory_reclaimed() isn't completely correct as deleting a
> memslot doesn't *guarantee* that guest memory is actually reclaimed (which reminds
> me, we need to figure out a better name for that thing before introducing
> kvm_arch_gmem_invalidate()).

I see. I'd assumed that was what you're using. I agree that it's not
completely correct, so for the moment I assume that if that happens we
have a misbehaving host, and we tear down the guest and reclaim its
memory.

> The effective false positives aren't fatal for the current usage because the hook
> is used only for x86 SEV guests to flush caches.  An unnecessary flush can cause
> performance issues, but it doesn't affect correctness.  For TDX and SNP, and IIUC
> pKVM, false positives are fatal because KVM could assign memory back to the host
> that is still owned by guest_memfd.

Yup.

> E.g. a misbehaving userspace could prematurely delete a memslot.  And the more
> fun example is intrahost migration, where the plan is to allow pointing multiple
> guest_memfd files at a single guest_memfd inode:
> https://lore.kernel.org/all/cover.1691446946.git.ackerleytng@xxxxxxxxxx
>
> There was a lot of discussion for this, but it's scattered all over the place.
> The TL;DR is that the inode will represent physical memory, and a file will
> represent a given "struct kvm" instance's view of that memory.  And so the
> memory isn't reclaimed until the inode is truncated/punched.
>
> I _think_ this reflects the most recent plan from the guest_memfd side:
> https://lore.kernel.org/all/1233d749211c08d51f9ca5d427938d47f008af1f.1689893403.git.isaku.yamahata@xxxxxxxxx

Thanks for pointing that out. I think this might be the way to go.
I'll have a closer look at this and see how to get it to work with
pKVM.

Cheers,
/fuad