Re: [patch 036/212] mm, slab: make flush_slab() possible to call with irqs enabled

On 9/3/21 09:03, Vlastimil Babka wrote:
> On 9/3/21 08:22, Mike Galbraith wrote:
>>> > so that when you read that function on its own, it's clear that the
>>> > lock is always held over that critical section - and the issue is that
>>> > perhaps the lock was already taken by the caller.
>>> 
>>> Actually that "already taken" becomes "caller does not need it/can't
> 
> Meant to say "... later in the series becomes ...".
> 
>>> even take the local lock as it's not local" (it's a cpu hot remove
>>> handler on behalf of another, dead cpu).
>>> 
>>> So would it work with something like the following cleanup on top later
>>> after proper testing? (now just compile tested).
>> 
>> To my eyeballs, below duplication of a couple lines of initialization
>> needed by the lockless function is less icky than the double return.
> 
> Yeah, that's better, thanks Mike.

Formal patch below, also added to my git branch:
https://git.kernel.org/pub/scm/linux/kernel/git/vbabka/linux.git/log/?h=slub-local-lock-v5r1

----8<----
From b67952ce67528f3ebeaae58e0eae22a6dbae64b5 Mon Sep 17 00:00:00 2001
From: Vlastimil Babka <vbabka@xxxxxxx>
Date: Fri, 3 Sep 2021 12:59:25 +0200
Subject: [PATCH] mm, slub: remove conditional locking parameter from
 flush_slab()

flush_slab() is called either as part of work scheduled on a given live cpu,
or as a cleanup for another cpu that went offline. In the first case it needs
to hold the cpu_slab->lock local lock when updating the protected
kmem_cache_cpu fields. This is currently achieved by a "bool lock" parameter.

To avoid the conditional locking, we can instead lock unconditionally in
flush_slab() for live cpus, and open-code the variant without locking in
__flush_cpu_slab() for the dead cpus.

Reported-by: Linus Torvalds <torvalds@xxxxxxxxxxxxxxxxxxxx>
Suggested-by: Mike Galbraith <efault@xxxxxx>
Signed-off-by: Vlastimil Babka <vbabka@xxxxxxx>
---
 mm/slub.c | 23 ++++++++++++++---------
 1 file changed, 14 insertions(+), 9 deletions(-)

diff --git a/mm/slub.c b/mm/slub.c
index df1ac8aff86f..77fe3d6d2065 100644
--- a/mm/slub.c
+++ b/mm/slub.c
@@ -2566,15 +2566,13 @@ static inline void unfreeze_partials_cpu(struct kmem_cache *s,
 
 #endif	/* CONFIG_SLUB_CPU_PARTIAL */
 
-static inline void flush_slab(struct kmem_cache *s, struct kmem_cache_cpu *c,
-			      bool lock)
+static inline void flush_slab(struct kmem_cache *s, struct kmem_cache_cpu *c)
 {
 	unsigned long flags;
 	void *freelist;
 	struct page *page;
 
-	if (lock)
-		local_lock_irqsave(&s->cpu_slab->lock, flags);
+	local_lock_irqsave(&s->cpu_slab->lock, flags);
 
 	freelist = c->freelist;
 	page = c->page;
@@ -2583,8 +2581,7 @@ static inline void flush_slab(struct kmem_cache *s, struct kmem_cache_cpu *c,
 	c->freelist = NULL;
 	c->tid = next_tid(c->tid);
 
-	if (lock)
-		local_unlock_irqrestore(&s->cpu_slab->lock, flags);
+	local_unlock_irqrestore(&s->cpu_slab->lock, flags);
 
 	if (page)
 		deactivate_slab(s, page, freelist);
@@ -2595,9 +2592,17 @@ static inline void flush_slab(struct kmem_cache *s, struct kmem_cache_cpu *c,
 static inline void __flush_cpu_slab(struct kmem_cache *s, int cpu)
 {
 	struct kmem_cache_cpu *c = per_cpu_ptr(s->cpu_slab, cpu);
+	struct page *page = c->page;
+	void *freelist = c->freelist;
 
-	if (c->page)
-		flush_slab(s, c, false);
+	c->page = NULL;
+	c->freelist = NULL;
+	c->tid = next_tid(c->tid);
+
+	if (page) {
+		deactivate_slab(s, page, freelist);
+		stat(s, CPUSLAB_FLUSH);
+	}
 
 	unfreeze_partials_cpu(s, c);
 }
@@ -2625,7 +2630,7 @@ static void flush_cpu_slab(struct work_struct *w)
 	c = this_cpu_ptr(s->cpu_slab);
 
 	if (c->page)
-		flush_slab(s, c, true);
+		flush_slab(s, c);
 
 	unfreeze_partials(s);
 }
-- 
2.33.0
