Re: [PATCH v2 1/1] mm/madvise: enhance lazyfreeing with mTHP in madvise_free

On Thu, Mar 7, 2024 at 9:00 PM Lance Yang <ioworker0@xxxxxxxxx> wrote:
>
> Hey Barry,
>
> Thanks for taking the time to review!
>
> On Thu, Mar 7, 2024 at 3:00 PM Barry Song <21cnbao@xxxxxxxxx> wrote:
> >
> > On Thu, Mar 7, 2024 at 7:15 PM Lance Yang <ioworker0@xxxxxxxxx> wrote:
> > >
> [...]
> > > +static inline bool can_mark_large_folio_lazyfree(unsigned long addr,
> > > +                                                struct folio *folio, pte_t *start_pte)
> > > +{
> > > +       int nr_pages = folio_nr_pages(folio);
> > > +       fpb_t flags = FPB_IGNORE_DIRTY | FPB_IGNORE_SOFT_DIRTY;
> > > +
> > > +       for (int i = 0; i < nr_pages; i++)
> > > +               if (page_mapcount(folio_page(folio, i)) != 1)
> > > +                       return false;
> >
> > We have moved to folio_estimated_sharers(); though it is not precise,
> > it lets us avoid doing this check with lots of loop iterations that
> > depend on each subpage's mapcount.
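> >
> > Untested sketch of what I mean (names and signatures taken from your
> > patch); the per-subpage mapcount loop goes away entirely:
> >
> > static inline bool can_mark_large_folio_lazyfree(unsigned long addr,
> >                                                  struct folio *folio,
> >                                                  pte_t *start_pte)
> > {
> >         int nr_pages = folio_nr_pages(folio);
> >         fpb_t flags = FPB_IGNORE_DIRTY | FPB_IGNORE_SOFT_DIRTY;
> >
> >         /* Heuristic sharer check: no per-subpage mapcount loop. */
> >         if (folio_estimated_sharers(folio) != 1)
> >                 return false;
> >
> >         /* All nr_pages PTEs must map this folio in one batch. */
> >         return nr_pages == folio_pte_batch(folio, addr, start_pte,
> >                                            ptep_get(start_pte), nr_pages,
> >                                            flags, NULL);
> > }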
>
> If we don't check the subpage's mapcount, and there is a CoW folio
> associated with this folio, and that CoW folio is smaller than this
> folio, should we still mark this folio as lazyfree?

I agree, this is true. However, we have more or less accepted that
folio_likely_mapped_shared() can produce false negatives or false
positives as a trade-off against the overhead. So I really don't know :-)

Maybe David and Vishal can give some comments here.
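
For reference (untested, and assuming the folio_likely_mapped_shared()
helper from David's series mentioned below lands), the large-folio check
in madvise_free_pte_range() would presumably become something like:

	/* Heuristic: cheap, but may be falsely negative or positive. */
	if (folio_likely_mapped_shared(folio) || !folio_trylock(folio))
		goto skip_large_folio;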

>
> > BTW, do we need to rebase our work against David's changes[1]?
> > [1] https://lore.kernel.org/linux-mm/20240227201548.857831-1-david@xxxxxxxxxx/
>
> Yes, we should rebase our work against David’s changes.
>
> >
> > > +
> > > +       return nr_pages == folio_pte_batch(folio, addr, start_pte,
> > > +                                        ptep_get(start_pte), nr_pages, flags, NULL);
> > > +}
> > > +
> > >  static int madvise_free_pte_range(pmd_t *pmd, unsigned long addr,
> > >                                 unsigned long end, struct mm_walk *walk)
> > >
> > > @@ -676,11 +690,45 @@ static int madvise_free_pte_range(pmd_t *pmd, unsigned long addr,
> > >                  */
> > >                 if (folio_test_large(folio)) {
> > >                         int err;
> > > +                       unsigned long next_addr, align;
> > >
> > > -                       if (folio_estimated_sharers(folio) != 1)
> > > -                               break;
> > > -                       if (!folio_trylock(folio))
> > > -                               break;
> > > +                       if (folio_estimated_sharers(folio) != 1 ||
> > > +                           !folio_trylock(folio))
> > > +                               goto skip_large_folio;
> >
> >
> > I don't think we can skip all the PTEs for nr_pages, as some of them
> > might be pointing to other folios.
> >
> > For example, for a large folio mapped by 16 PTEs, if you do
> > MADV_DONTNEED on PTE15-PTE16 and then write to that memory, you get
> > page faults, so PTE15 and PTE16 will end up pointing to two different
> > small folios. We can only skip when we are sure that
> > nr_pages == folio_pte_batch().
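> >
> > Rough sketch of what I mean (untested; nr and max_nr are illustrative
> > locals): advance only over the PTEs that folio_pte_batch() has proven
> > to map this folio, e.g.:
> >
> >         max_nr = (end - addr) >> PAGE_SHIFT;
> >         nr = folio_pte_batch(folio, addr, pte, ptent, max_nr,
> >                              flags, NULL);
> >         /* Skip only the PTEs verified to map this large folio. */
> >         pte += nr;
> >         addr += nr * PAGE_SIZE;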
>
> Agreed. Thanks for pointing that out.
>
> >
> > > +
> > > +                       align = folio_nr_pages(folio) * PAGE_SIZE;
> > > +                       next_addr = ALIGN_DOWN(addr + align, align);
> > > +
> > > +                       /*
> > > +                        * If we mark only the subpages as lazyfree, or
> > > +                        * cannot mark the entire large folio as lazyfree,
> > > +                        * then just split it.
> > > +                        */
> > > +                       if (next_addr > end || next_addr - addr != align ||
> > > +                           !can_mark_large_folio_lazyfree(addr, folio, pte))
> > > +                               goto split_large_folio;
> > > +
> > > +                       /*
> > > +                        * Avoid unnecessary folio splitting if the large
> > > +                        * folio is entirely within the given range.
> > > +                        */
> > > +                       folio_clear_dirty(folio);
> > > +                       folio_unlock(folio);
> > > +                       for (; addr != next_addr; pte++, addr += PAGE_SIZE) {
> > > +                               ptent = ptep_get(pte);
> > > +                               if (pte_young(ptent) || pte_dirty(ptent)) {
> > > +                                       ptent = ptep_get_and_clear_full(
> > > +                                               mm, addr, pte, tlb->fullmm);
> > > +                                       ptent = pte_mkold(ptent);
> > > +                                       ptent = pte_mkclean(ptent);
> > > +                                       set_pte_at(mm, addr, pte, ptent);
> > > +                                       tlb_remove_tlb_entry(tlb, pte, addr);
> > > +                               }
> >
> > Can we do this in batches? For a CONT-PTE mapped large folio, you are
> > unfolding and then folding again for every PTE. That seems quite
> > expensive.
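> >
> > Untested sketch, assuming the batch helpers from David's recent
> > PTE-batching work (get_and_clear_full_ptes(), set_ptes() and
> > tlb_remove_tlb_entries()) are usable here:
> >
> >         /* One batched RMW for the whole folio instead of nr_pages. */
> >         ptent = get_and_clear_full_ptes(mm, addr, pte, nr_pages,
> >                                         tlb->fullmm);
> >         ptent = pte_mkold(ptent);
> >         ptent = pte_mkclean(ptent);
> >         set_ptes(mm, addr, pte, ptent, nr_pages);
> >         tlb_remove_tlb_entries(tlb, pte, nr_pages, addr);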
>
> Thanks for your suggestion. I'll do this in batches in v3.
>
> Thanks again for your time!
>
> Best,
> Lance
>
> >
> > > +                       }
> > > +                       folio_mark_lazyfree(folio);
> > > +                       goto next_folio;
> > > +
> > > +split_large_folio:
> > >                         folio_get(folio);
> > >                         arch_leave_lazy_mmu_mode();
> > >                         pte_unmap_unlock(start_pte, ptl);
> > > @@ -688,13 +736,28 @@ static int madvise_free_pte_range(pmd_t *pmd, unsigned long addr,
> > >                         err = split_folio(folio);
> > >                         folio_unlock(folio);
> > >                         folio_put(folio);
> > > -                       if (err)
> > > -                               break;
> > > -                       start_pte = pte =
> > > -                               pte_offset_map_lock(mm, pmd, addr, &ptl);
> > > -                       if (!start_pte)
> > > -                               break;
> > > -                       arch_enter_lazy_mmu_mode();
> > > +
> > > +                       /*
> > > +                        * If the large folio is locked or cannot be split,
> > > +                        * we just skip it.
> > > +                        */
> > > +                       if (err) {
> > > +skip_large_folio:
> > > +                               if (next_addr >= end)
> > > +                                       break;
> > > +                               pte += (next_addr - addr) / PAGE_SIZE;
> > > +                               addr = next_addr;
> > > +                       }
> > > +
> > > +                       if (!start_pte) {
> > > +                               start_pte = pte = pte_offset_map_lock(
> > > +                                       mm, pmd, addr, &ptl);
> > > +                               if (!start_pte)
> > > +                                       break;
> > > +                               arch_enter_lazy_mmu_mode();
> > > +                       }
> > > +
> > > +next_folio:
> > >                         pte--;
> > >                         addr -= PAGE_SIZE;
> > >                         continue;
> > > --
> > > 2.33.1
> > >

Thanks
Barry




