[PATCH 3/3 v5] mm/vmalloc: Cache the vmalloc memory info

* Ingo Molnar <mingo@xxxxxxxxxx> wrote:

> One more detail: I just realized that with the read barriers, the READ_ONCE() 
> accesses are not needed anymore - the barriers and the control dependencies are 
> enough.
> 
> This will further simplify the code.

I.e. something like the updated patch below. (We still need the WRITE_ONCE() 
for the vmap_info_gen update: the increment happens under vmap_area_lock, but 
the counter is read locklessly, so the store must not tear.)

Thanks,

	Ingo

========================>
From 46a0507e0a395a7bc2fe4b46a4766e7457ac0140 Mon Sep 17 00:00:00 2001
From: Ingo Molnar <mingo@xxxxxxxxxx>
Date: Sat, 22 Aug 2015 12:28:01 +0200
Subject: [PATCH] mm/vmalloc: Cache the vmalloc memory info

Linus reported that for scripting-intensive workloads such as the
Git build, glibc's qsort() will read /proc/meminfo in every process
created (by way of get_phys_pages()), which causes the Git build
to generate a surprising amount of kernel overhead.

A fair chunk of the overhead is due to get_vmalloc_info() - which
walks a potentially long list to do its statistics.

Modify Linus's jiffies-based patch to use generation counters
to cache the vmalloc info: vmap_unlock() increments the generation
counter, and get_vmalloc_info() reads it and compares it
against a cached generation counter.

Also use a seqlock-style scheme (a spinlock plus the generation
counters and memory barriers) to make sure we always print a
consistent set of vmalloc statistics.

Reported-by: Linus Torvalds <torvalds@xxxxxxxxxxxxxxxxxxxx>
Cc: Andrew Morton <akpm@xxxxxxxxxxxxxxxxxxxx>
Cc: Rik van Riel <riel@xxxxxxxxxx>
Cc: linux-mm@xxxxxxxxx
Signed-off-by: Ingo Molnar <mingo@xxxxxxxxxx>
---
 mm/vmalloc.c | 82 +++++++++++++++++++++++++++++++++++++++++++++++++++++++++---
 1 file changed, 79 insertions(+), 3 deletions(-)

diff --git a/mm/vmalloc.c b/mm/vmalloc.c
index 605138083880..2f8d9660e007 100644
--- a/mm/vmalloc.c
+++ b/mm/vmalloc.c
@@ -276,7 +276,21 @@ EXPORT_SYMBOL(vmalloc_to_pfn);
 #define VM_LAZY_FREEING	0x02
 #define VM_VM_AREA	0x04
 
-static DEFINE_SPINLOCK(vmap_area_lock);
+static __cacheline_aligned_in_smp DEFINE_SPINLOCK(vmap_area_lock);
+
+#ifdef CONFIG_PROC_FS
+/*
+ * A spinlock and two generation counters for a simple cache of the
+ * vmalloc allocation statistics info printed in /proc/meminfo.
+ *
+ * ( The optimization assumes that the info is read frequently but
+ *   modified infrequently. )
+ */
+static DEFINE_SPINLOCK(vmap_info_lock);
+static int vmap_info_gen = 1;
+static int vmap_info_cache_gen;
+static struct vmalloc_info vmap_info_cache;
+#endif
 
 static inline void vmap_lock(void)
 {
@@ -285,6 +299,9 @@ static inline void vmap_lock(void)
 
 static inline void vmap_unlock(void)
 {
+#ifdef CONFIG_PROC_FS
+	WRITE_ONCE(vmap_info_gen, vmap_info_gen + 1);
+#endif
 	spin_unlock(&vmap_area_lock);
 }
 
@@ -2699,7 +2716,7 @@ static int __init proc_vmalloc_init(void)
 }
 module_init(proc_vmalloc_init);
 
-void get_vmalloc_info(struct vmalloc_info *vmi)
+static void calc_vmalloc_info(struct vmalloc_info *vmi)
 {
 	struct vmap_area *va;
 	unsigned long free_area_size;
@@ -2746,5 +2763,64 @@ void get_vmalloc_info(struct vmalloc_info *vmi)
 out:
 	rcu_read_unlock();
 }
-#endif
 
+/*
+ * Return a consistent snapshot of the current vmalloc allocation
+ * statistics, for /proc/meminfo:
+ */
+void get_vmalloc_info(struct vmalloc_info *vmi)
+{
+	int gen = vmap_info_gen;
+
+	/*
+	 * If the generation counter of the cache matches that of
+	 * the vmalloc generation counter then return the cache:
+	 */
+	if (vmap_info_cache_gen == gen) {
+		int gen_after;
+
+		/*
+		 * The two read barriers make sure that we read
+		 * 'gen', 'vmap_info_cache' and 'gen_after' in
+		 * precisely that order:
+		 */
+		smp_rmb();
+		*vmi = vmap_info_cache;
+
+		smp_rmb();
+		gen_after = vmap_info_gen;
+
+		/* The cache is still valid: */
+		if (gen == gen_after)
+			return;
+
+		/* Ok, the cache got invalidated just now, regenerate it */
+		gen = gen_after;
+	}
+
+	/* Make sure 'gen' is read before the vmalloc info */
+	smp_rmb();
+
+	calc_vmalloc_info(vmi);
+
+	/*
+	 * All updates to vmap_info_cache_gen go through this spinlock,
+	 * so when the cache got invalidated, we'll only mark it valid
+	 * again if we first fully write the new vmap_info_cache.
+	 *
+	 * This ensures that partial results won't be used.
+	 */
+	spin_lock(&vmap_info_lock);
+	if (gen - vmap_info_cache_gen > 0) {
+		vmap_info_cache = *vmi;
+		/*
+		 * Make sure the new cached data is visible before
+		 * the generation counter update:
+		 */
+		smp_wmb();
+		vmap_info_cache_gen = gen;
+	}
+	spin_unlock(&vmap_info_lock);
+}
+
+#endif /* CONFIG_PROC_FS */
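
For reference, here is a minimal user-space model of the caching scheme, in
case anyone wants to experiment with it outside the kernel. All names here
(raw_data, compute_stat(), etc.) are made up for this sketch, and C11 seq_cst
atomics stand in for the patch's finer-grained smp_rmb()/smp_wmb() barriers -
stronger ordering than strictly necessary, but simpler to reason about:

#include <stdatomic.h>
#include <pthread.h>
#include <stdio.h>

static _Atomic long raw_data;		/* stands in for the vmap area list */
static _Atomic int  data_gen = 1;	/* bumped on every modification */
static _Atomic int  cache_gen;		/* generation the cache reflects */
static _Atomic long stat_cache;		/* the cached statistic */
static pthread_mutex_t cache_lock = PTHREAD_MUTEX_INITIALIZER;

/* Writer side: mirrors vmap_unlock() bumping vmap_info_gen.
   (In the kernel, both stores happen under vmap_area_lock.) */
static void modify_data(long v)
{
	atomic_store(&raw_data, v);
	atomic_fetch_add(&data_gen, 1);
}

/* Stands in for the expensive calc_vmalloc_info() list walk: */
static long compute_stat(void)
{
	return atomic_load(&raw_data);
}

/* Reader side: mirrors get_vmalloc_info(): */
static long get_stat(void)
{
	int gen = atomic_load(&data_gen);
	long val;

	if (atomic_load(&cache_gen) == gen) {
		/* Read 'gen', the cache, then 'gen' again - in that order: */
		val = atomic_load(&stat_cache);
		if (atomic_load(&data_gen) == gen)
			return val;		/* the cache was stable */
		gen = atomic_load(&data_gen);	/* invalidated: recompute */
	}

	val = compute_stat();

	pthread_mutex_lock(&cache_lock);
	/* Only ever move the cache generation forward: */
	if (gen - atomic_load(&cache_gen) > 0) {
		atomic_store(&stat_cache, val);
		atomic_store(&cache_gen, gen);
	}
	pthread_mutex_unlock(&cache_lock);
	return val;
}

int main(void)
{
	modify_data(42);
	printf("%ld\n", get_stat());	/* slow path: computes and caches */
	printf("%ld\n", get_stat());	/* fast path: served from the cache */
	return 0;
}

The key property is the re-read of the generation counter after copying the
cache: any concurrent invalidation makes the fast path fall through to a
recompute, so a torn or stale copy is never returned to the caller.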
