On Mon, Jul 17, 2023 at 03:31:09PM +0100, Ryan Roberts wrote:
> +/*
> + * folio_remove_rmap_range - take down pte mappings from a range of pages
> + * belonging to a folio. All pages are accounted as small pages.
> + * @folio: folio that all pages belong to
> + * @page: first page in range to remove mapping from
> + * @nr: number of pages in range to remove mapping from
> + * @vma: the vm area from which the mapping is removed
> + *
> + * The caller needs to hold the pte lock.
> + */

This could stand a little reworking. How about this?

/**
 * folio_remove_rmap_range - Take down PTE mappings from a range of pages.
 * @folio: Folio containing all pages in range.
 * @page: First page in range to unmap.
 * @nr: Number of pages to unmap.
 * @vma: The VM area containing the range.
 *
 * All pages in the range must belong to the same VMA & folio. They
 * must be mapped with PTEs, not a PMD.
 *
 * Context: Caller holds the pte lock.
 */

> +void folio_remove_rmap_range(struct folio *folio, struct page *page,
> +				int nr, struct vm_area_struct *vma)
> +{
> +	atomic_t *mapped = &folio->_nr_pages_mapped;
> +	int nr_unmapped = 0;
> +	int nr_mapped;
> +	bool last;
> +	enum node_stat_item idx;
> +
> +	if (unlikely(folio_test_hugetlb(folio))) {
> +		VM_WARN_ON_FOLIO(1, folio);
> +		return;
> +	}
> +
> +	if (!folio_test_large(folio)) {
> +		/* Is this the page's last map to be removed? */
> +		last = atomic_add_negative(-1, &page->_mapcount);
> +		nr_unmapped = last;
> +	} else {
> +		for (; nr != 0; nr--, page++) {
> +			/* Is this the page's last map to be removed? */
> +			last = atomic_add_negative(-1, &page->_mapcount);
> +			if (last) {
> +				/* Page still mapped if folio mapped entirely */
> +				nr_mapped = atomic_dec_return_relaxed(mapped);

We're still doing one atomic op per page on the folio's nr_pages_mapped
... is it possible to batch this and use atomic_sub_return_relaxed()?
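
Purely as a sketch of the idea, not a claim about what the final code
should look like: count the pages whose last PTE mapping went away,
then subtract them from _nr_pages_mapped in a single operation.
The nr_last variable is invented for this sketch, and whatever
consumes nr_mapped after the decrement isn't part of the quoted hunk,
so it is only hinted at in a comment:

	} else {
		int nr_last = 0;	/* hypothetical counter, not in the patch */

		for (; nr != 0; nr--, page++) {
			/* Is this the page's last map to be removed? */
			if (atomic_add_negative(-1, &page->_mapcount))
				nr_last++;
		}

		if (nr_last) {
			/* One atomic op on _nr_pages_mapped for the whole range */
			nr_mapped = atomic_sub_return_relaxed(nr_last, mapped);
			/* ... derive nr_unmapped from nr_mapped here ... */
		}
	}

That trades the per-page read of the shared counter for a single RMW
over the range, which is the point of the question; whether the stats
logic downstream can live with only seeing the final value of
nr_mapped is the part that would need checking.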