Alistair Popple wrote:
> In preparation for using insert_page() for DAX, enhance
> insert_page_into_pte_locked() to handle establishing writable
> mappings. Recall that DAX returns VM_FAULT_NOPAGE after installing a
> PTE which bypasses the typical set_pte_range() in finish_fault().
>
> Signed-off-by: Alistair Popple <apopple@xxxxxxxxxx>
> Suggested-by: Dan Williams <dan.j.williams@xxxxxxxxx>
>
> ---
>
> Changes for v5:
>
> - Minor comment/formatting fixes suggested by David Hildenbrand
>
> Changes since v2:
>
> - New patch split out from "mm/memory: Add dax_insert_pfn"
> ---
>  mm/memory.c | 37 +++++++++++++++++++++++++++++--------
>  1 file changed, 29 insertions(+), 8 deletions(-)
>
> diff --git a/mm/memory.c b/mm/memory.c
> index 06bb29e..8531acb 100644
> --- a/mm/memory.c
> +++ b/mm/memory.c
> @@ -2126,19 +2126,40 @@ static int validate_page_before_insert(struct vm_area_struct *vma,
>  }
>
>  static int insert_page_into_pte_locked(struct vm_area_struct *vma, pte_t *pte,
> -		unsigned long addr, struct page *page, pgprot_t prot)
> +		unsigned long addr, struct page *page,
> +		pgprot_t prot, bool mkwrite)
>  {
>  	struct folio *folio = page_folio(page);
> +	pte_t entry = ptep_get(pte);
>  	pte_t pteval;
>
> -	if (!pte_none(ptep_get(pte)))
> -		return -EBUSY;
> +	if (!pte_none(entry)) {
> +		if (!mkwrite)
> +			return -EBUSY;
> +
> +		/* see insert_pfn(). */
> +		if (pte_pfn(entry) != page_to_pfn(page)) {
> +			WARN_ON_ONCE(!is_zero_pfn(pte_pfn(entry)));
> +			return -EFAULT;
> +		}
> +		entry = maybe_mkwrite(entry, vma);
> +		entry = pte_mkyoung(entry);
> +		if (ptep_set_access_flags(vma, addr, pte, entry, 1))
> +			update_mmu_cache(vma, addr, pte);
> +		return 0;
> +	}

This hunk feels like it is begging to be unified with insert_pfn() after
pfn_t dies. Perhaps a TODO to remember to come back and unify them, or
you can go append that work to your pfn_t removal series? A rough sketch
of the shape I have in mind is at the end of this mail.

Other than that you can add:

Reviewed-by: Dan Williams <dan.j.williams@xxxxxxxxx>
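For concreteness, below is roughly the shape I am picturing (completely
untested, and upgrade_pte_mkwrite() is a made-up name): pull the
"PTE already present, upgrade it to writable" path into a shared helper
that both insert_page_into_pte_locked() and insert_pfn() call once
insert_pfn() deals in raw pfns. One wrinkle: insert_pfn() also does
pte_mkdirty() on this path while the hunk above does not, so the
dirty-bit handling would need reconciling.

static int upgrade_pte_mkwrite(struct vm_area_struct *vma,
			       unsigned long addr, pte_t *pte,
			       unsigned long pfn)
{
	pte_t entry = ptep_get(pte);

	/* Only an existing mapping of the same pfn may be upgraded. */
	if (pte_pfn(entry) != pfn) {
		WARN_ON_ONCE(!is_zero_pfn(pte_pfn(entry)));
		return -EFAULT;
	}

	entry = maybe_mkwrite(pte_mkyoung(entry), vma);
	if (ptep_set_access_flags(vma, addr, pte, entry, 1))
		update_mmu_cache(vma, addr, pte);
	return 0;
}

Then insert_page_into_pte_locked() passes page_to_pfn(page) and
insert_pfn() passes the pfn it was handed.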