On Tue, Mar 18, 2025 at 9:20 AM Fuad Tabba <tabba@xxxxxxxxxx> wrote:
> ...
> +/*
> + * Callback function for __folio_put(), i.e., called once all references by the
> + * host to the folio have been dropped. This allows gmem to transition the state
> + * of the folio to shared with the guest, and allows the hypervisor to continue
> + * transitioning its state to private, since the host cannot attempt to access
> + * it anymore.
> + */
>  void kvm_gmem_handle_folio_put(struct folio *folio)
>  {
> -	WARN_ONCE(1, "A placeholder that shouldn't trigger. Work in progress.");
> +	struct address_space *mapping;
> +	struct xarray *shared_offsets;
> +	struct inode *inode;
> +	pgoff_t index;
> +	void *xval;
> +
> +	mapping = folio->mapping;
> +	if (WARN_ON_ONCE(!mapping))
> +		return;
> +
> +	inode = mapping->host;
> +	index = folio->index;
> +	shared_offsets = &kvm_gmem_private(inode)->shared_offsets;
> +	xval = xa_mk_value(KVM_GMEM_GUEST_SHARED);
> +
> +	filemap_invalidate_lock(inode->i_mapping);

As discussed upstream for guest_memfd, folio_put() can happen from atomic
context [1], so we need either a way to defer this work outside
kvm_gmem_handle_folio_put() (which is very likely needed anyway to handle
hugepages and the merge operation), or to make sure the logic only uses
synchronization primitives that do not sleep.

[1] https://elixir.bootlin.com/linux/v6.14-rc6/source/include/linux/mm.h#L1483

> +	folio_lock(folio);
> +	kvm_gmem_restore_pending_folio(folio, inode);
> +	folio_unlock(folio);
> +	WARN_ON_ONCE(xa_err(xa_store(shared_offsets, index, xval, GFP_KERNEL)));
> +	filemap_invalidate_unlock(inode->i_mapping);
> }
> EXPORT_SYMBOL_GPL(kvm_gmem_handle_folio_put);
>
> --
> 2.49.0.rc1.451.g8f38331e32-goog
>
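
For the deferral option, something along the lines of the usual workqueue
pattern might work. This is only a rough, untested sketch to illustrate the
shape of it; the struct, the __kvm_gmem_handle_folio_put() split-out, and the
error handling on allocation failure are all made up and would need real
thought (in particular, dropping the transition on -ENOMEM is not acceptable
as-is):

struct kvm_gmem_folio_put_work {
	struct work_struct work;
	struct folio *folio;
};

static void kvm_gmem_folio_put_worker(struct work_struct *work)
{
	struct kvm_gmem_folio_put_work *fpw =
		container_of(work, struct kvm_gmem_folio_put_work, work);

	/*
	 * Process context: sleeping locks such as filemap_invalidate_lock()
	 * and GFP_KERNEL allocations are safe here.
	 */
	__kvm_gmem_handle_folio_put(fpw->folio);	/* hypothetical helper */
	kfree(fpw);
}

void kvm_gmem_handle_folio_put(struct folio *folio)
{
	struct kvm_gmem_folio_put_work *fpw;

	/* May be called from atomic context, so no sleeping allocation. */
	fpw = kmalloc(sizeof(*fpw), GFP_ATOMIC);
	if (WARN_ON_ONCE(!fpw))
		return;	/* XXX: folio state transition would be lost */

	fpw->folio = folio;
	INIT_WORK(&fpw->work, kvm_gmem_folio_put_worker);
	schedule_work(&fpw->work);
}

Embedding the work item in existing per-folio or per-inode state instead of
kmalloc'ing it would avoid the allocation-failure problem entirely, and may
fit better with whatever is needed for the hugepage merge path anyway.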