Hi David,

On Thu, Feb 22, 2024 at 4:28 PM David Hildenbrand <david@xxxxxxxxxx> wrote:
>
> > +static vm_fault_t kvm_gmem_fault(struct vm_fault *vmf)
> > +{
> > +        struct folio *folio;
> > +
> > +        folio = kvm_gmem_get_folio(file_inode(vmf->vma->vm_file), vmf->pgoff);
> > +        if (!folio)
> > +                return VM_FAULT_SIGBUS;
> > +
> > +        /*
> > +         * Check if the page is allowed to be faulted to the host, with the
> > +         * folio lock held to ensure that the check and incrementing the page
> > +         * count are protected by the same folio lock.
> > +         */
> > +        if (!kvm_gmem_isfaultable(vmf)) {
> > +                folio_unlock(folio);
> > +                return VM_FAULT_SIGBUS;
> > +        }
> > +
> > +        vmf->page = folio_file_page(folio, vmf->pgoff);
>
> We won't currently get hugetlb (or even THP) here. It mimics what shmem
> would do.

At the moment there is no hugetlb support in guest_memfd(), nor in
pKVM, although we do plan on supporting it.

> finish_fault->set_pte_range() will call folio_add_file_rmap_ptes(),
> getting the rmap involved.
>
> Do we have some tests in place that make sure that
> fallocate(FALLOC_FL_PUNCH_HOLE|FALLOC_FL_KEEP_SIZE) will properly unmap
> the page again (IOW, that the rmap does indeed work?).

I'm not sure whether you mean kernel tests or whether I've tested it.
There are guest_memfd() tests for
fallocate(FALLOC_FL_PUNCH_HOLE|FALLOC_FL_KEEP_SIZE), which I have run.
I've also tested it manually with sample programs, and it behaves as
expected (a rough sketch of that kind of check is at the end of this
mail).

Otherwise, for Gunyah, Elliot has used folio_mmapped() [1], but
Matthew doesn't think that it would do what we'd like it to do, i.e.,
ensure that _no one_ can fault in the page [2].

I would appreciate any ideas, comments, or suggestions regarding this.

Thanks!
/fuad

[1] https://lore.kernel.org/all/20240222141602976-0800.eberman@xxxxxxxxxxxxxxxxxxxxxxxxxx/
[2] https://lore.kernel.org/all/ZdfoR3nCEP3HTtm1@xxxxxxxxxxxxxxxxxxxx/

> --
> Cheers,
>
> David / dhildenb
>
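
For illustration, a minimal sketch of the kind of manual check
mentioned above (not the actual program, and untested as pasted; it
assumes the host-side mmap support for guest_memfd that this series
adds, and a kernel whose headers provide KVM_CREATE_GUEST_MEMFD):

#define _GNU_SOURCE
#include <assert.h>
#include <fcntl.h>
#include <linux/kvm.h>
#include <stdio.h>
#include <sys/ioctl.h>
#include <sys/mman.h>
#include <unistd.h>

int main(void)
{
	long page = sysconf(_SC_PAGESIZE);
	int kvm = open("/dev/kvm", O_RDWR);
	int vm = ioctl(kvm, KVM_CREATE_VM, 0);
	struct kvm_create_guest_memfd gmem_args = { .size = page };
	int gmem = ioctl(vm, KVM_CREATE_GUEST_MEMFD, &gmem_args);
	char *p;
	int ret;

	assert(kvm >= 0 && vm >= 0 && gmem >= 0);

	/* Host-side mmap of guest_memfd; only possible with this series. */
	p = mmap(NULL, page, PROT_READ | PROT_WRITE, MAP_SHARED, gmem, 0);
	assert(p != MAP_FAILED);

	/* Write to the mapping so the folio is faulted in from the host. */
	p[0] = 0x42;

	/* Punch a hole; the folio should be unmapped (via the rmap) and freed. */
	ret = fallocate(gmem, FALLOC_FL_PUNCH_HOLE | FALLOC_FL_KEEP_SIZE,
			0, page);
	assert(ret == 0);

	/* Touching the page again should fault in a fresh, zeroed folio. */
	printf("after punch: %#x (expect 0)\n", p[0]);

	munmap(p, page);
	return 0;
}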