On 23/01/2024 14:13, David Hildenbrand wrote:
>>> Although now I'm wondering if there is a race here... What happens if a page in
>>> the parent becomes dirty after you have checked it but before you write protect
>>> it? Isn't that already a problem with the current non-batched version? Why do we
>>> even need to preserve dirty in the child for private mappings?
>>
>> I suspect, because the parent could zap the anon folio. If the folio is
>> clean, but the PTE dirty, I suspect that we could lose data of the child
>> if we were to evict that clean folio (swapout).
>>
>> So I assume we simply copy the dirty PTE bit, so the system knows that
>> that folio is actually dirty, because one PTE is dirty.
>
> Oh, and regarding your race concern: it's undefined which page state we
> would see if some write is racing with fork, so it also doesn't matter
> if we would copy the PTE dirty bit or not, if it gets set in a racy fashion.

Ahh that makes sense. Thanks.

>
> I'll now experiment with:

Looks good as long as it's still performant.

>
> From 14e83ff2a422a96ce5701f9c8454a49f9ed947e3 Mon Sep 17 00:00:00 2001
> From: David Hildenbrand <david@xxxxxxxxxx>
> Date: Sat, 30 Dec 2023 12:54:35 +0100
> Subject: [PATCH] mm/memory: ignore dirty/accessed/soft-dirty bits in
>  folio_pte_batch()
>
> Let's always ignore the accessed/young bit: we'll always mark the PTE
> as old in our child process during fork, and upcoming users will
> similarly not care.
>
> Ignore the dirty bit only if we don't want to duplicate the dirty bit
> into the child process during fork. Maybe, we could just set all PTEs
> in the child dirty if any PTE is dirty. For now, let's keep the behavior
> unchanged.
>
> Ignore the soft-dirty bit only if the bit doesn't have any meaning in
> the src vma.
>
> Signed-off-by: David Hildenbrand <david@xxxxxxxxxx>
> ---
>  mm/memory.c | 34 ++++++++++++++++++++++++++++++----
>  1 file changed, 30 insertions(+), 4 deletions(-)
>
> diff --git a/mm/memory.c b/mm/memory.c
> index 7690994929d26..9aba1b0e871ca 100644
> --- a/mm/memory.c
> +++ b/mm/memory.c
> @@ -953,24 +953,44 @@ static __always_inline void __copy_present_ptes(struct vm_area_struct *dst_vma,
>  	set_ptes(dst_vma->vm_mm, addr, dst_pte, pte, nr);
>  }
>
> +/* Flags for folio_pte_batch(). */
> +typedef int __bitwise fpb_t;
> +
> +/* Compare PTEs after pte_mkclean(), ignoring the dirty bit. */
> +#define FPB_IGNORE_DIRTY		((__force fpb_t)BIT(0))
> +
> +/* Compare PTEs after pte_clear_soft_dirty(), ignoring the soft-dirty bit. */
> +#define FPB_IGNORE_SOFT_DIRTY		((__force fpb_t)BIT(1))
> +
> +static inline pte_t __pte_batch_clear_ignored(pte_t pte, fpb_t flags)
> +{
> +	if (flags & FPB_IGNORE_DIRTY)
> +		pte = pte_mkclean(pte);
> +	if (likely(flags & FPB_IGNORE_SOFT_DIRTY))
> +		pte = pte_clear_soft_dirty(pte);
> +	return pte_mkold(pte);
> +}
> +
>  /*
>   * Detect a PTE batch: consecutive (present) PTEs that map consecutive
>   * pages of the same folio.
>   *
>   * All PTEs inside a PTE batch have the same PTE bits set, excluding the PFN.
> + * the accessed bit, dirty bit (with FPB_IGNORE_DIRTY) and soft-dirty bit
> + * (with FPB_IGNORE_SOFT_DIRTY).
>   */
>  static inline int folio_pte_batch(struct folio *folio, unsigned long addr,
> -		pte_t *start_ptep, pte_t pte, int max_nr)
> +		pte_t *start_ptep, pte_t pte, int max_nr, fpb_t flags)
>  {
>  	unsigned long folio_end_pfn = folio_pfn(folio) + folio_nr_pages(folio);
>  	const pte_t *end_ptep = start_ptep + max_nr;
> -	pte_t expected_pte = pte_next_pfn(pte);
> +	pte_t expected_pte = __pte_batch_clear_ignored(pte_next_pfn(pte), flags);
>  	pte_t *ptep = start_ptep + 1;
>
>  	VM_WARN_ON_FOLIO(!pte_present(pte), folio);
>
>  	while (ptep != end_ptep) {
> -		pte = ptep_get(ptep);
> +		pte = __pte_batch_clear_ignored(ptep_get(ptep), flags);
>
>  		if (!pte_same(pte, expected_pte))
>  			break;
> @@ -1004,6 +1024,7 @@ copy_present_ptes(struct vm_area_struct *dst_vma, struct vm_area_struct *src_vma
>  {
>  	struct page *page;
>  	struct folio *folio;
> +	fpb_t flags = 0;
>  	int err, nr;
>
>  	page = vm_normal_page(src_vma, addr, pte);
> @@ -1018,7 +1039,12 @@ copy_present_ptes(struct vm_area_struct *dst_vma, struct vm_area_struct *src_vma
>  	 * by keeping the batching logic separate.
>  	 */
>  	if (unlikely(!*prealloc && folio_test_large(folio) && max_nr != 1)) {
> -		nr = folio_pte_batch(folio, addr, src_pte, pte, max_nr);
> +		if (src_vma->vm_flags & VM_SHARED)
> +			flags |= FPB_IGNORE_DIRTY;
> +		if (!vma_soft_dirty_enabled(src_vma))
> +			flags |= FPB_IGNORE_SOFT_DIRTY;
> +
> +		nr = folio_pte_batch(folio, addr, src_pte, pte, max_nr, flags);
>  		folio_ref_add(folio, nr);
>  		if (folio_test_anon(folio)) {
>  			if (unlikely(folio_try_dup_anon_rmap_ptes(folio, page,
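
For anyone following along: the core trick above is to normalize away the bits we agreed to ignore before comparing each PTE against the expected "next PFN" value. Below is a minimal userspace mock-up of that idea (not kernel code; mock_pte_t, clear_ignored(), pte_batch() and the bit layout are all made up for illustration), showing how a dirty/accessed PTE in the middle of a run breaks the batch under strict comparison but not under relaxed comparison:

```c
#include <stdint.h>
#include <stdio.h>

#define PFN_SHIFT	12			/* 4 KiB pages */
#define BIT_DIRTY	(1ull << 1)		/* illustrative bit positions */
#define BIT_ACCESSED	(1ull << 5)

#define IGNORE_DIRTY	(1u << 0)		/* mimics FPB_IGNORE_DIRTY */

typedef uint64_t mock_pte_t;

/* Mimics __pte_batch_clear_ignored(): mask off bits we do not compare. */
static mock_pte_t clear_ignored(mock_pte_t pte, unsigned int flags)
{
	if (flags & IGNORE_DIRTY)
		pte &= ~BIT_DIRTY;
	/* The accessed/young bit is always ignored, as in the patch. */
	return pte & ~BIT_ACCESSED;
}

/* Mimics folio_pte_batch(): count consecutive PTEs with consecutive PFNs. */
static int pte_batch(const mock_pte_t *ptep, int max_nr, unsigned int flags)
{
	mock_pte_t expected = clear_ignored(ptep[0] + (1ull << PFN_SHIFT), flags);
	int nr = 1;

	while (nr < max_nr && clear_ignored(ptep[nr], flags) == expected) {
		expected += 1ull << PFN_SHIFT;	/* advance to the next PFN */
		nr++;
	}
	return nr;
}

int main(void)
{
	/* Three consecutive PFNs; the middle PTE is dirty and accessed. */
	mock_pte_t ptes[] = {
		(100ull << PFN_SHIFT),
		(101ull << PFN_SHIFT) | BIT_DIRTY | BIT_ACCESSED,
		(102ull << PFN_SHIFT),
	};

	printf("strict:  batch of %d\n", pte_batch(ptes, 3, 0));		/* 1 */
	printf("relaxed: batch of %d\n", pte_batch(ptes, 3, IGNORE_DIRTY));	/* 3 */
	return 0;
}
```

This also illustrates why the patch only sets FPB_IGNORE_DIRTY for VM_SHARED mappings: for private anon mappings the dirty bit must be copied into the child, so PTEs differing in it cannot be treated as identical.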