On Tue, Sep 10, 2024 at 11:43:45PM +0000, Ackerley Tng wrote:
> +static int kvm_gmem_hugetlb_filemap_remove_folios(struct address_space *mapping,
> +						  struct hstate *h,
> +						  loff_t lstart, loff_t lend)
> +{
> +	const pgoff_t end = lend >> PAGE_SHIFT;
> +	pgoff_t next = lstart >> PAGE_SHIFT;
> +	struct folio_batch fbatch;
> +	int num_freed = 0;
> +
> +	folio_batch_init(&fbatch);
> +	while (filemap_get_folios(mapping, &next, end - 1, &fbatch)) {
> +		int i;
> +		for (i = 0; i < folio_batch_count(&fbatch); ++i) {
> +			struct folio *folio;
> +			pgoff_t hindex;
> +			u32 hash;
> +
> +			folio = fbatch.folios[i];
> +			hindex = folio->index >> huge_page_order(h);
> +			hash = hugetlb_fault_mutex_hash(mapping, hindex);
> +
> +			mutex_lock(&hugetlb_fault_mutex_table[hash]);

I'm debugging an issue and this caught my attention.  IIUC we need to
unmap one last time here, while holding the fault mutex, right?
Something like:

	unmap_mapping_range(mapping, lstart, lend - lstart, 0);

Otherwise I don't see what protects against a concurrent fault racing
with the removal of the folio from the page cache.

See remove_inode_single_folio() in hugetlbfs for reference.  For
generic folios, unmapping normally requires the folio lock, IIUC, but
here the fault mutex should be enough.

So far my issue hasn't gone away even with that line added, but I
figured I should raise it here anyway, at least as a question.  (A
minimal sketch of what I mean is at the bottom of this mail.)

> +			kvm_gmem_hugetlb_filemap_remove_folio(folio);
> +			mutex_unlock(&hugetlb_fault_mutex_table[hash]);
> +
> +			num_freed++;
> +		}
> +		folio_batch_release(&fbatch);
> +		cond_resched();
> +	}
> +
> +	return num_freed;
> +}

-- 
Peter Xu
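
For completeness, the shape I have in mind is roughly the below
(untested sketch only; I'm assuming per-folio granularity under the
fault mutex is acceptable, and using folio_pos()/folio_size() for the
byte range the folio backs):

	mutex_lock(&hugetlb_fault_mutex_table[hash]);

	/*
	 * Unmap only the range backed by this folio, while holding the
	 * fault mutex, so that a concurrent fault cannot re-establish a
	 * mapping between the unmap and the filemap removal below.
	 */
	unmap_mapping_range(mapping, folio_pos(folio),
			    folio_size(folio), 0);

	kvm_gmem_hugetlb_filemap_remove_folio(folio);
	mutex_unlock(&hugetlb_fault_mutex_table[hash]);

A single unmap_mapping_range(mapping, lstart, lend - lstart, 0) before
the loop would also empty the range, but on its own it would not stop a
fault from sneaking back in after the unmap and before each folio is
dropped from the page cache, hence doing it under the per-folio fault
mutex.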