On 1/17/25 17:29, Fuad Tabba wrote:
> Before transitioning a guest_memfd folio to unshared, thereby
> disallowing access by the host and allowing the hypervisor to
> transition its view of the guest page as private, we need to be
> sure that the host doesn't have any references to the folio.
>
> This patch introduces a new type for guest_memfd folios, and uses
> that to register a callback that informs the guest_memfd
> subsystem when the last reference is dropped, therefore knowing
> that the host doesn't have any remaining references.
>
> Signed-off-by: Fuad Tabba <tabba@xxxxxxxxxx>
> ---
> The function kvm_slot_gmem_register_callback() isn't used in this
> series. It will be used later in code that performs unsharing of
> memory. I have tested it with pKVM, based on downstream code [*].
> It's included in this RFC since it demonstrates the plan to
> handle unsharing of private folios.
>
> [*] https://android-kvm.googlesource.com/linux/+/refs/heads/tabba/guestmem-6.13-v5-pkvm

<snip>

> --- a/virt/kvm/guest_memfd.c
> +++ b/virt/kvm/guest_memfd.c
> @@ -387,6 +387,28 @@ enum folio_mappability {
>  	KVM_GMEM_NONE_MAPPABLE	= 0b11, /* Not mappable, transient state. */
>  };
>
> +/*
> + * Unregisters the __folio_put() callback from the folio.
> + *
> + * Restores a folio's refcount after all pending references have been released,
> + * and removes the folio type, thereby removing the callback. Now the folio can
> + * be freed normally once all actual references have been dropped.
> + *
> + * Must be called with the filemap (inode->i_mapping) invalidate_lock held.
> + * Must also have exclusive access to the folio: folio must be either locked, or
> + * gmem holds the only reference.
> + */
> +static void __kvm_gmem_restore_pending_folio(struct folio *folio)
> +{
> +	if (WARN_ON_ONCE(folio_mapped(folio) || !folio_test_guestmem(folio)))
> +		return;
> +
> +	WARN_ON_ONCE(!folio_test_locked(folio) && folio_ref_count(folio) > 1);

Similar to Kirill's objection on the other patch, I think there might be a
speculative refcount increase (i.e. from a pfn scanner) as long as we have
refcount over 1. Probably not a problem here if we want to restore the
refcount anyway? But the warning would be spurious.
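To illustrate what I mean, a pfn scanner typically only takes a temporary
reference while it peeks at the folio, roughly like the (hypothetical,
simplified) snippet below, where scan_one_folio() stands in for whatever
work the scanner actually does:

	static void scan_pfn(unsigned long pfn)
	{
		struct folio *folio = pfn_folio(pfn);

		if (!folio_try_get(folio))
			return;
		/*
		 * The refcount is now transiently elevated even though the
		 * host holds no mapping and no long-term reference.
		 */
		scan_one_folio(folio);	/* hypothetical scanner work */
		folio_put(folio);
	}

So observing folio_ref_count(folio) > 1 here doesn't necessarily mean the
host still has a real reference, and the WARN could fire for a perfectly
fine situation.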
> +
> +	__folio_clear_guestmem(folio);
> +	folio_ref_add(folio, folio_nr_pages(folio));
> +}
> +
>  /*
>   * Marks the range [start, end) as mappable by both the host and the guest.
>   * Usually called when guest shares memory with the host.
> @@ -400,7 +422,31 @@ static int gmem_set_mappable(struct inode *inode, pgoff_t start, pgoff_t end)
>
>  	filemap_invalidate_lock(inode->i_mapping);
>  	for (i = start; i < end; i++) {
> +		struct folio *folio = NULL;
> +
> +		/*
> +		 * If the folio is NONE_MAPPABLE, it indicates that it is
> +		 * transitioning to private (GUEST_MAPPABLE). Transition it to
> +		 * shared (ALL_MAPPABLE) immediately, and remove the callback.
> +		 */
> +		if (xa_to_value(xa_load(mappable_offsets, i)) == KVM_GMEM_NONE_MAPPABLE) {
> +			folio = filemap_lock_folio(inode->i_mapping, i);
> +			if (WARN_ON_ONCE(IS_ERR(folio))) {
> +				r = PTR_ERR(folio);
> +				break;
> +			}
> +
> +			if (folio_test_guestmem(folio))
> +				__kvm_gmem_restore_pending_folio(folio);
> +		}
> +
>  		r = xa_err(xa_store(mappable_offsets, i, xval, GFP_KERNEL));
> +
> +		if (folio) {
> +			folio_unlock(folio);
> +			folio_put(folio);
> +		}
> +
>  		if (r)
>  			break;
>  	}
> @@ -473,6 +519,105 @@ static int gmem_clear_mappable(struct inode *inode, pgoff_t start, pgoff_t end)
>  	return r;
>  }
>
> +/*
> + * Registers a callback to __folio_put(), so that gmem knows that the host does
> + * not have any references to the folio. It does that by setting the folio type
> + * to guestmem.
> + *
> + * Returns 0 if the host doesn't have any references, or -EAGAIN if the host
> + * has references, and the callback has been registered.

Note this comment.

> + *
> + * Must be called with the following locks held:
> + * - filemap (inode->i_mapping) invalidate_lock
> + * - folio lock
> + */
> +static int __gmem_register_callback(struct folio *folio, struct inode *inode, pgoff_t idx)
> +{
> +	struct xarray *mappable_offsets = &kvm_gmem_private(inode)->mappable_offsets;
> +	void *xval_guest = xa_mk_value(KVM_GMEM_GUEST_MAPPABLE);
> +	int refcount;
> +
> +	rwsem_assert_held_write_nolockdep(&inode->i_mapping->invalidate_lock);
> +	WARN_ON_ONCE(!folio_test_locked(folio));
> +
> +	if (folio_mapped(folio) || folio_test_guestmem(folio))
> +		return -EAGAIN;

But here we return -EAGAIN and no callback was registered?

> +
> +	/* Register a callback first. */
> +	__folio_set_guestmem(folio);
> +
> +	/*
> +	 * Check for references after setting the type to guestmem, to guard
> +	 * against potential races with the refcount being decremented later.
> +	 *
> +	 * At least one reference is expected because the folio is locked.
> +	 */
> +
> +	refcount = folio_ref_sub_return(folio, folio_nr_pages(folio));
> +	if (refcount == 1) {
> +		int r;
> +
> +		/* refcount isn't elevated, it's now faultable by the guest. */

Again this seems racy, somebody could have just speculatively increased it.
Maybe we need to freeze here as well?
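I.e. instead of the folio_ref_sub_return() / refcount == 1 dance, maybe
something along these lines (completely untested, and the expected count of
"our reference from filemap_lock_folio() plus the filemap references being
dropped" is only my reading of the refcounting here, so treat it as a
sketch):

	/* refs we expect if nobody else holds one: filemap's + ours */
	int expected = folio_nr_pages(folio) + 1;
	int r;

	if (!folio_ref_freeze(folio, expected))
		return -EAGAIN;	/* somebody, possibly only speculatively, holds a ref */

	/*
	 * The refcount is frozen at zero here, so a concurrent
	 * folio_try_get() from a pfn scanner can no longer succeed while
	 * the state flips.
	 */
	r = WARN_ON_ONCE(xa_err(xa_store(mappable_offsets, idx, xval_guest,
					 GFP_KERNEL)));

	/* restore the count we froze from; the patch's type/refcount fixup would follow */
	folio_ref_unfreeze(folio, expected);

(glossing over the type clearing / refcount restore that
__kvm_gmem_restore_pending_folio() does). The point is that the "no other
references" check and the transition would then be atomic with respect to
temporary folio_try_get() users.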
> +		r = WARN_ON_ONCE(xa_err(xa_store(mappable_offsets, idx, xval_guest, GFP_KERNEL)));
> +		if (!r)
> +			__kvm_gmem_restore_pending_folio(folio);
> +
> +		return r;
> +	}
> +
> +	return -EAGAIN;
> +}
> +
> +int kvm_slot_gmem_register_callback(struct kvm_memory_slot *slot, gfn_t gfn)
> +{
> +	unsigned long pgoff = slot->gmem.pgoff + gfn - slot->base_gfn;
> +	struct inode *inode = file_inode(slot->gmem.file);
> +	struct folio *folio;
> +	int r;
> +
> +	filemap_invalidate_lock(inode->i_mapping);
> +
> +	folio = filemap_lock_folio(inode->i_mapping, pgoff);
> +	if (WARN_ON_ONCE(IS_ERR(folio))) {
> +		r = PTR_ERR(folio);
> +		goto out;
> +	}
> +
> +	r = __gmem_register_callback(folio, inode, pgoff);
> +
> +	folio_unlock(folio);
> +	folio_put(folio);
> +out:
> +	filemap_invalidate_unlock(inode->i_mapping);
> +
> +	return r;
> +}
> +
> +/*
> + * Callback function for __folio_put(), i.e., called when all references by the
> + * host to the folio have been dropped. This allows gmem to transition the state
> + * of the folio to mappable by the guest, and allows the hypervisor to continue
> + * transitioning its state to private, since the host cannot attempt to access
> + * it anymore.
> + */
> +void kvm_gmem_handle_folio_put(struct folio *folio)
> +{
> +	struct xarray *mappable_offsets;
> +	struct inode *inode;
> +	pgoff_t index;
> +	void *xval;
> +
> +	inode = folio->mapping->host;
> +	index = folio->index;
> +	mappable_offsets = &kvm_gmem_private(inode)->mappable_offsets;
> +	xval = xa_mk_value(KVM_GMEM_GUEST_MAPPABLE);
> +
> +	filemap_invalidate_lock(inode->i_mapping);
> +	__kvm_gmem_restore_pending_folio(folio);
> +	WARN_ON_ONCE(xa_err(xa_store(mappable_offsets, index, xval, GFP_KERNEL)));
> +	filemap_invalidate_unlock(inode->i_mapping);
> +}
> +
>  static bool gmem_is_mappable(struct inode *inode, pgoff_t pgoff)
>  {
>  	struct xarray *mappable_offsets = &kvm_gmem_private(inode)->mappable_offsets;