[PATCH] mm/slub: Make __ksize() faster

This is part of a larger in-progress patch series to add tracking of
the amount of memory stranded via RCU.

I want to base this on willy's series that rearranges folio batch
freeing - that's got the __folio_put() cleanups I want. Once he's got
that up in a git tree I'll rebase onto it and finish the rest.

-- >8 --

With the SLAB allocator gone, we now have a free u32 in struct slab.

This patch steals it to make __ksize() faster: the size lookup is now a
single dependent load instead of two. That's going to be important for
tracking the amount of memory stranded by RCU, which we want to be able
to do if we're going to be freeing all pagecache folios (and perhaps all
folios) via RCU.
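
To spell out the load-chain difference, here's a toy user-space model
(hypothetical names and a simplified layout, not the kernel structs):

/* Toy model of the __ksize() change: the old path chases two pointers
 * (slab -> slab_cache -> size); the new path reads a size cached in
 * the slab itself, so there is only one dependent load.
 */
#include <stddef.h>
#include <stdio.h>

struct toy_cache {
	size_t object_size;		/* what slab_ksize() would return */
};

struct toy_slab {
	struct toy_cache *slab_cache;	/* old path: first dependent load */
	size_t object_size;		/* new path: cached copy, one load */
};

/* Old-style lookup: load slab->slab_cache, then load its object_size. */
static size_t ksize_old(const struct toy_slab *slab)
{
	return slab->slab_cache->object_size;
}

/* New-style lookup: a single load from the slab. */
static size_t ksize_new(const struct toy_slab *slab)
{
	return slab->object_size;
}

int main(void)
{
	struct toy_cache cache = { .object_size = 192 };
	struct toy_slab slab = {
		.slab_cache	= &cache,
		.object_size	= cache.object_size,
	};

	printf("old: %zu new: %zu\n", ksize_old(&slab), ksize_new(&slab));
	return 0;
}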

Signed-off-by: Kent Overstreet <kent.overstreet@xxxxxxxxx>
Cc: linux-mm@xxxxxxxxx
Cc: Vlastimil Babka <vbabka@xxxxxxx>
Cc: Andrew Morton <akpm@xxxxxxxxxxxxxxxxxxxx>
Cc: Christoph Lameter <cl@xxxxxxxxx>
Cc: Pekka Enberg <penberg@xxxxxxxxxx>
Cc: David Rientjes <rientjes@xxxxxxxxxx>
Cc: Roman Gushchin <roman.gushchin@xxxxxxxxx>
Cc: Hyeonggon Yoo <42.hyeyoo@xxxxxxxxx>
Cc: Matthew Wilcox (Oracle) <willy@xxxxxxxxxxxxx>
---
 mm/slab.h        | 2 +-
 mm/slab_common.c | 9 ++++-----
 mm/slub.c        | 1 +
 3 files changed, 6 insertions(+), 6 deletions(-)

diff --git a/mm/slab.h b/mm/slab.h
index 54deeb0428c6..64f06431cc97 100644
--- a/mm/slab.h
+++ b/mm/slab.h
@@ -84,7 +84,7 @@ struct slab {
 		};
 		struct rcu_head rcu_head;
 	};
-	unsigned int __unused;
+	unsigned int object_size;
 
 	atomic_t __page_refcount;
 #ifdef CONFIG_MEMCG
diff --git a/mm/slab_common.c b/mm/slab_common.c
index 6ec0f6543f34..f209b8cf4965 100644
--- a/mm/slab_common.c
+++ b/mm/slab_common.c
@@ -963,13 +963,12 @@ size_t __ksize(const void *object)
 		if (WARN_ON(object != folio_address(folio)))
 			return 0;
 		return folio_size(folio);
-	}
-
+	} else {
 #ifdef CONFIG_SLUB_DEBUG
-	skip_orig_size_check(folio_slab(folio)->slab_cache, object);
+		skip_orig_size_check(folio_slab(folio)->slab_cache, object);
 #endif
-
-	return slab_ksize(folio_slab(folio)->slab_cache);
+		return folio_slab(folio)->object_size;
+	}
 }
 
 /**
diff --git a/mm/slub.c b/mm/slub.c
index 2ef88bbf56a3..37fe5774c110 100644
--- a/mm/slub.c
+++ b/mm/slub.c
@@ -2366,6 +2366,7 @@ static struct slab *allocate_slab(struct kmem_cache *s, gfp_t flags, int node)
 	}
 
 	slab->objects = oo_objects(oo);
+	slab->object_size = slab_ksize(s);
 	slab->inuse = 0;
 	slab->frozen = 0;
 
-- 
2.43.0




