+static vm_fault_t kvm_gmem_fault(struct vm_fault *vmf)
+{
+	struct folio *folio;
+
+	folio = kvm_gmem_get_folio(file_inode(vmf->vma->vm_file), vmf->pgoff);
+	if (!folio)
+		return VM_FAULT_SIGBUS;
+
+	/*
+	 * Check if the page is allowed to be faulted to the host, with the
+	 * folio lock held to ensure that the check and incrementing the page
+	 * count are protected by the same folio lock.
+	 */
+	if (!kvm_gmem_isfaultable(vmf)) {
+		folio_unlock(folio);
+		return VM_FAULT_SIGBUS;
+	}
+
+	vmf->page = folio_file_page(folio, vmf->pgoff);
We won't currently get hugetlb (or even THP) folios here; this mimics what shmem would do.
finish_fault->set_pte_range() will call folio_add_file_rmap_ptes(), getting the rmap involved.
Do we have tests in place that make sure that fallocate(FALLOC_FL_PUNCH_HOLE|FALLOC_FL_KEEP_SIZE) will properly unmap the page again (IOW, that the rmap does indeed work)?
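If not, something along these lines might be worth adding as a selftest. This is just a sketch: create_gmem_fd() is a hypothetical stand-in for the KVM_CREATE_GUEST_MEMFD plus mmap setup this series enables, and it assumes a punched range reads back as zeroes on the next fault:

#define _GNU_SOURCE
#include <fcntl.h>
#include <stdio.h>
#include <stdlib.h>
#include <string.h>
#include <sys/mman.h>
#include <unistd.h>

/*
 * Hypothetical helper: returns a mappable guest_memfd of the given size
 * (e.g. KVM_CREATE_GUEST_MEMFD on a VM fd plus whatever setup this
 * series requires). Not shown here.
 */
extern int create_gmem_fd(size_t size);

int main(void)
{
	size_t page_size = sysconf(_SC_PAGESIZE);
	int fd = create_gmem_fd(page_size);
	char *p;

	if (fd < 0) {
		perror("create_gmem_fd");
		return EXIT_FAILURE;
	}

	p = mmap(NULL, page_size, PROT_READ | PROT_WRITE, MAP_SHARED, fd, 0);
	if (p == MAP_FAILED) {
		perror("mmap");
		return EXIT_FAILURE;
	}

	/* Fault the page in through the gmem fault handler and dirty it. */
	memset(p, 0xaa, page_size);

	/* Punch a hole; truncation should unmap the page via the rmap. */
	if (fallocate(fd, FALLOC_FL_PUNCH_HOLE | FALLOC_FL_KEEP_SIZE,
		      0, page_size)) {
		perror("fallocate");
		return EXIT_FAILURE;
	}

	/*
	 * If the old page was unmapped, this access faults in a fresh
	 * zero-filled page instead of reading the stale 0xaa contents.
	 */
	if (p[0] != 0) {
		fprintf(stderr, "stale page still mapped after punch\n");
		return EXIT_FAILURE;
	}

	munmap(p, page_size);
	close(fd);
	return EXIT_SUCCESS;
}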
-- 
Cheers,

David / dhildenb