Re: [PATCH] mm: add maybe_lru_add_drain() that only drains when threshold is exceeded

On 18.12.24 17:56, Rik van Riel wrote:
The lru_add_drain() call in zap_page_range_single() always takes some locks,
and will drain the buffers even when there is only a single page pending.

We probably don't need to do that, since zap_page_range already deals fine with
encountering pages that are still in the buffers of other CPUs.

On an AMD Milan CPU, the will-it-scale tlb_flush2_threads test performance with
36 threads (one for each core) increases from 526k to 730k loops per second.

The overhead in this case came from the lruvec locks: taking the lock just to flush
a single page. There may be other spots where this variant could be appropriate.

Signed-off-by: Rik van Riel <riel@xxxxxxxxxxx>
---
  include/linux/swap.h |  1 +
  mm/memory.c          |  2 +-
  mm/swap.c            | 18 ++++++++++++++++++
  mm/swap_state.c      |  2 +-
  4 files changed, 21 insertions(+), 2 deletions(-)

diff --git a/include/linux/swap.h b/include/linux/swap.h
index dd5ac833150d..a2f06317bd4b 100644
--- a/include/linux/swap.h
+++ b/include/linux/swap.h
@@ -391,6 +391,7 @@ static inline void lru_cache_enable(void)
  }
  extern void lru_cache_disable(void);
+extern void maybe_lru_add_drain(void);
  extern void lru_add_drain(void);
  extern void lru_add_drain_cpu(int cpu);
  extern void lru_add_drain_cpu_zone(struct zone *zone);
diff --git a/mm/memory.c b/mm/memory.c
index 2635f7bceab5..1767c65b93ad 100644
--- a/mm/memory.c
+++ b/mm/memory.c
@@ -1919,7 +1919,7 @@ void zap_page_range_single(struct vm_area_struct *vma, unsigned long address,
  	struct mmu_notifier_range range;
  	struct mmu_gather tlb;
-	lru_add_drain();
+	maybe_lru_add_drain();
  	mmu_notifier_range_init(&range, MMU_NOTIFY_CLEAR, 0, vma->vm_mm,
  				address, end);
  	hugetlb_zap_begin(vma, &range.start, &range.end);
diff --git a/mm/swap.c b/mm/swap.c
index 9caf6b017cf0..001664a652ff 100644
--- a/mm/swap.c
+++ b/mm/swap.c
@@ -777,6 +777,24 @@ void lru_add_drain(void)
  	mlock_drain_local();
  }
+static bool should_lru_add_drain(void)
+{
+	struct cpu_fbatches *fbatches = this_cpu_ptr(&cpu_fbatches);
+	int pending = folio_batch_count(&fbatches->lru_add);
+	pending += folio_batch_count(&fbatches->lru_deactivate);
+	pending += folio_batch_count(&fbatches->lru_deactivate_file);
+	pending += folio_batch_count(&fbatches->lru_lazyfree);
+
+	/* Don't bother draining unless we have several pages pending. */
+	return pending > SWAP_CLUSTER_MAX;
+}
+
+void maybe_lru_add_drain(void)
+{
+	if (should_lru_add_drain())
+		lru_add_drain();
+}
+
  /*
   * It's called from per-cpu workqueue context in SMP case so
   * lru_add_drain_cpu and invalidate_bh_lrus_cpu should run on
diff --git a/mm/swap_state.c b/mm/swap_state.c
index 3a0cf965f32b..1ae4cd7b041e 100644
--- a/mm/swap_state.c
+++ b/mm/swap_state.c
@@ -317,7 +317,7 @@ void free_pages_and_swap_cache(struct encoded_page **pages, int nr)
  	struct folio_batch folios;
  	unsigned int refs[PAGEVEC_SIZE];
-	lru_add_drain();
+	maybe_lru_add_drain();

I'm wondering about the reason for, and the effect of, this existing call.

It seems to date back to the beginning of git history.

Likely it doesn't make sense to have effectively-free pages in the LRU+mlock cache. But then, this only considers the local CPU LRU/mlock caches ... hmmm
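
For reference, this is roughly what the local drain does in current mm/swap.c (paraphrased, details differ between kernel versions): it only flushes this CPU's folio batches, so anything still sitting on other CPUs is only reachable via the much heavier lru_add_drain_all().

	void lru_add_drain(void)
	{
		local_lock(&cpu_fbatches.lock);
		/* drains only this CPU's folio batches (lru_add, deactivate, lazyfree, ...) */
		lru_add_drain_cpu(smp_processor_id());
		local_unlock(&cpu_fbatches.lock);
		/* likewise, only this CPU's mlock folio batch */
		mlock_drain_local();
	}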

So .... do we need this at all? :)

--
Cheers,

David / dhildenb




