Re: [PATCH v5 1/2] mm/madvise: optimize lazyfreeing with mTHP in madvise_free

On 11.04.24 15:51, Ryan Roberts wrote:
On 11/04/2024 13:23, Lance Yang wrote:
On Thu, Apr 11, 2024 at 7:27 PM Ryan Roberts <ryan.roberts@xxxxxxx> wrote:

On 11/04/2024 12:20, David Hildenbrand wrote:
On 11.04.24 13:11, Ryan Roberts wrote:
On 08/04/2024 05:24, Lance Yang wrote:
This patch optimizes lazyfreeing with PTE-mapped mTHP [1] (inspired by
David Hildenbrand [2]). We aim to avoid unnecessary folio splitting if the
large folio is fully mapped within the target range.

If a large folio is locked or shared, or if we fail to split it, we just
leave it in place and advance to the next PTE in the range. Note that this
is a change in behavior: previously, any failure of this sort would cause
the entire operation to give up. As large folios become more common,
sticking to the old way could result in wasted opportunities.

On an Intel i5 CPU, lazyfreeing a 1GiB VMA backed by PTE-mapped folios of
the same size results in the following runtimes for madvise(MADV_FREE) in
seconds (shorter is better):

Folio Size |   Old    |   New    | Change
------------------------------------------
        4KiB | 0.590251 | 0.590259 |    0%
       16KiB | 2.990447 | 0.185655 |  -94%
       32KiB | 2.547831 | 0.104870 |  -95%
       64KiB | 2.457796 | 0.052812 |  -97%
      128KiB | 2.281034 | 0.032777 |  -99%
      256KiB | 2.230387 | 0.017496 |  -99%
      512KiB | 2.189106 | 0.010781 |  -99%
     1024KiB | 2.183949 | 0.007753 |  -99%
     2048KiB | 0.002799 | 0.002804 |    0%

[1] https://lkml.kernel.org/r/20231207161211.2374093-5-ryan.roberts@xxxxxxx
[2] https://lore.kernel.org/linux-mm/20240214204435.167852-1-david@xxxxxxxxxx

Signed-off-by: Lance Yang <ioworker0@xxxxxxxxx>
---
 include/linux/pgtable.h |  34 +++++++++
 mm/internal.h           |  12 +++-
 mm/madvise.c            | 149 ++++++++++++++++++++++------------------
 mm/memory.c             |   4 +-
 4 files changed, 129 insertions(+), 70 deletions(-)

diff --git a/include/linux/pgtable.h b/include/linux/pgtable.h
index 0f4b2faa1d71..4dd442787420 100644
--- a/include/linux/pgtable.h
+++ b/include/linux/pgtable.h
@@ -489,6 +489,40 @@ static inline pte_t ptep_get_and_clear(struct mm_struct *mm,
 }
 #endif

+#ifndef mkold_clean_ptes
+/**
+ * mkold_clean_ptes - Mark PTEs that map consecutive pages of the same folio
+ *        as old and clean.
+ * @mm: Address space the pages are mapped into.
+ * @addr: Address the first page is mapped at.
+ * @ptep: Page table pointer for the first entry.
+ * @nr: Number of entries to mark old and clean.
+ *
+ * May be overridden by the architecture; otherwise, implemented by
+ * get_and_clear/modify/set for each pte in the range.
+ *
+ * Note that PTE bits in the PTE range besides the PFN can differ. For example,
+ * some PTEs might be write-protected.
+ *
+ * Context: The caller holds the page table lock.  The PTEs map consecutive
+ * pages that belong to the same folio.  The PTEs are all in the same PMD.
+ */
+static inline void mkold_clean_ptes(struct mm_struct *mm, unsigned long addr,
+                    pte_t *ptep, unsigned int nr)
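
Per the kerneldoc above, the generic fallback amounts to a
get_and_clear/modify/set loop over each PTE. Roughly, as a sketch (not the
literal patch body):

	pte_t pte;

	for (;;) {
		pte = ptep_get_and_clear(mm, addr, ptep);
		/* clear the accessed and dirty bits, then write back */
		set_pte_at(mm, addr, ptep, pte_mkclean(pte_mkold(pte)));
		if (--nr == 0)
			break;
		ptep++;
		addr += PAGE_SIZE;
	}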


Thanks for the suggestions, Ryan, David!

Just thinking out loud, I wonder if it would be cleaner to convert mkold_ptes()
(which I added as part of swap-out) to something like:

Yeah, this is definitely cleaner than before.


clear_young_dirty_ptes(struct mm_struct *mm, unsigned long addr,
                pte_t *ptep, unsigned int nr,
                bool clear_young, bool clear_dirty);

Then we can use the same function for both use cases and also have the ability
to only clear dirty in future if we ever need it. The other advantage is that we
only need to plumb a single function down to the arm64 arch code; as it
currently stands, those 2 functions would be duplicating most of their code.
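
The two call sites would then differ only in their arguments, e.g.
(illustrative, assuming the prototype above):

	/* MADV_FREE: mark the batch old and clean */
	clear_young_dirty_ptes(mm, addr, ptep, nr, true, true);

	/* swap-out (what mkold_ptes() does today): mark old only */
	clear_young_dirty_ptes(mm, addr, ptep, nr, true, false);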

Agreed. It's indeed a good idea to use a single function for both use cases.


Yes. Maybe better to use proper __bitwise flags; the compiler should be smart
enough to optimize either way.
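
For instance, something along these lines (names purely illustrative, following
the same sparse-checked pattern as fpb_t in mm/internal.h):

	typedef int __bitwise cydp_t;

	/* hypothetical flag names, not from the patch */
	#define CYDP_CLEAR_YOUNG	((__force cydp_t)BIT(0))
	#define CYDP_CLEAR_DIRTY	((__force cydp_t)BIT(1))

	void clear_young_dirty_ptes(struct mm_struct *mm, unsigned long addr,
				    pte_t *ptep, unsigned int nr, cydp_t flags);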

Nice. I'll use the __bitwise flags as the input.


Agreed. I was also thinking perhaps it makes sense to start using output bitwise
flags for folio_pte_batch() since this patch set takes us up to 3 optional bool
pointers for different things. Might be cleaner to have input flags to tell it
what we care about and output flags to highlight those things. I guess the
compiler should be able to optimize in the same way.
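
Something like this, perhaps (hypothetical names; today folio_pte_batch()
reports via separate bool pointers):

	/*
	 * Input: which properties the caller wants tracked (bits 0-1 are
	 * already taken by the existing FPB_IGNORE_* flags).
	 */
	#define FPB_TRACK_WRITABLE	((__force fpb_t)BIT(2))
	#define FPB_TRACK_YOUNG	((__force fpb_t)BIT(3))
	#define FPB_TRACK_DIRTY	((__force fpb_t)BIT(4))

	/* out_flags: the subset of tracked bits actually seen in the batch */
	int folio_pte_batch(struct folio *folio, unsigned long addr, pte_t *ptep,
			    pte_t pte, int max_nr, fpb_t flags, fpb_t *out_flags);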


Should I start using output bitwise flags for folio_pte_batch() in
this patch set?

I don't think it's crucial (yet). I'd leave it as you have done it for now,
unless David shouts.

Let's do that separately, and investigate if the compiler actually is smart enough ... :)

--
Cheers,

David / dhildenb
