On Mon, Feb 06, 2023 at 10:06:38PM +0800, Yin Fengwei wrote:
> diff --git a/include/linux/mm.h b/include/linux/mm.h
> index d6f8f41514cc..93192f04b276 100644
> --- a/include/linux/mm.h
> +++ b/include/linux/mm.h
> @@ -1162,6 +1162,9 @@ static inline pte_t maybe_mkwrite(pte_t pte, struct vm_area_struct *vma)
> 
>  vm_fault_t do_set_pmd(struct vm_fault *vmf, struct page *page);
>  void do_set_pte(struct vm_fault *vmf, struct page *page, unsigned long addr);
> +void do_set_pte_range(struct vm_fault *vmf, struct folio *folio,
> +		unsigned long addr, pte_t *pte,
> +		unsigned long start, unsigned int nr);

There are only two callers of do_set_pte(), and they're both in mm.
I don't think we should retain do_set_pte() as a wrapper, but rather
change both callers to call 'set_pte_range()'.  The 'do' doesn't add
any value, so let's drop that word.

> +	if (!cow) {
> +		folio_add_file_rmap_range(folio, start, nr, vma, false);
> +		add_mm_counter(vma->vm_mm, mm_counter_file(page), nr);
> +	} else {
> +		/*
> +		 * rmap code is not ready to handle COW with anonymous
> +		 * large folio yet. Capture and warn if large folio
> +		 * is given.
> +		 */
> +		VM_WARN_ON_FOLIO(folio_test_large(folio), folio);
> +	}

The handling of cow pages is still very clunky.
folio_add_new_anon_rmap() handles anonymous large folios just fine.
I think David was looking at current code, not the code in mm-next.

> +		set_pte_at(vma->vm_mm, addr, pte, entry);
> +
> +		/* no need to invalidate: a not-present page won't be cached */
> +		update_mmu_cache(vma, addr, pte);
> +	} while (pte++, page++, addr += PAGE_SIZE, --nr > 0);

There's no need to speed-run this.  Let's do it properly and get the
arch interface right.  This code isn't going to hit linux-next for four
more weeks, which is plenty of time.
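
To spell out the rename: keep the signature from the patch above, drop
the 'do_', and convert both callers (finish_fault() and
filemap_map_pages(), unless I've missed one) instead of wrapping:

	void set_pte_range(struct vm_fault *vmf, struct folio *folio,
			unsigned long addr, pte_t *pte,
			unsigned long start, unsigned int nr);

The single-page callers just pass nr == 1.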
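
For the cow side, rather than warning and punting, the else branch can
do the anon rmap properly with what's already in mm-next.  Rough sketch
only (write/dirty bit handling elided; 'addr' is the base address of
the range):

	if (!cow) {
		folio_add_file_rmap_range(folio, start, nr, vma, false);
		add_mm_counter(vma->vm_mm, mm_counter_file(page), nr);
	} else {
		/* new, exclusive anon folio: no warn-and-bail needed */
		folio_add_new_anon_rmap(folio, vma, addr);
		folio_add_lru_vma(folio, vma);
		add_mm_counter(vma->vm_mm, MM_ANONPAGES, nr);
	}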
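
And by "get the arch interface right" I mean hiding the per-page
set_pte_at() / update_mmu_cache() calls behind something an arch can
override.  Strawman only; the name and the pfn arithmetic are
hypothetical, and a real version needs per-arch definitions because
not every architecture keeps the pfn at PAGE_SHIFT in the pte:

	static inline void set_ptes(struct mm_struct *mm, unsigned long addr,
			pte_t *pte, pte_t entry, unsigned int nr)
	{
		for (;;) {
			set_pte_at(mm, addr, pte, entry);
			if (--nr == 0)
				break;
			pte++;
			addr += PAGE_SIZE;
			/*
			 * Advance the pfn by one page.  Only correct
			 * where the pfn sits at PAGE_SHIFT in the pte.
			 */
			entry = __pte(pte_val(entry) + PAGE_SIZE);
		}
	}

plus a ranged update_mmu_cache() so architectures that currently flush
per page can do it once for the whole range.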