On 3/28/22 17:12, Roman Gushchin wrote:
On Mon, Mar 28, 2022 at 04:46:39PM -0400, Waiman Long wrote:
On 3/28/22 15:12, Roman Gushchin wrote:
On Sun, Mar 27, 2022 at 08:57:15PM -0400, Waiman Long wrote:
On 3/22/22 22:12, Muchun Song wrote:
On Wed, Mar 23, 2022 at 9:55 AM Waiman Long <longman@xxxxxxxxxx> wrote:
On 3/22/22 21:06, Muchun Song wrote:
On Wed, Mar 9, 2022 at 10:40 PM Waiman Long <longman@xxxxxxxxxx> wrote:
Since commit 2c80cd57c743 ("mm/list_lru.c: fix list_lru_count_node()
to be race free"), we are tracking the total number of lru
entries in a list_lru_node in its nr_items field. In the case of
memcg_reparent_list_lru_node(), there is nothing to be done if nr_items
is 0. We don't even need to take the nlru->lock as no new lru entry
could be added by a racing list_lru_add() to the draining src_idx memcg
at this point.
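[For reference, the fast path under discussion sits at the top of
memcg_reparent_list_lru_node(). A simplified excerpt, reconstructed from
the diff quoted later in this thread; the parameter names beyond the hunk
header are assumed and may differ from the posted patch:]

static void memcg_reparent_list_lru_node(struct list_lru *lru, int nid,
					 int src_idx, struct mem_cgroup *dst_memcg)
{
	struct list_lru_node *nlru = &lru->node[nid];
	struct list_lru_one *src, *dst;

	/*
	 * If there is no lru entry in this nlru, we can skip it
	 * immediately.
	 */
	if (!READ_ONCE(nlru->nr_items))
		return;
	/* ... reparent src_idx entries to dst_memcg under nlru->lock ... */
}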
Hi Waiman,
Sorry for the late reply. Quick question: what if there is an inflight
list_lru_add()? How about the following race?
CPU0:                                   CPU1:
list_lru_add()
    spin_lock(&nlru->lock)
    l = list_lru_from_kmem(memcg)
                                        memcg_reparent_objcgs(memcg)
                                        memcg_reparent_list_lrus(memcg)
                                          memcg_reparent_list_lru()
                                            memcg_reparent_list_lru_node()
                                              if (!READ_ONCE(nlru->nr_items))
                                                  // Miss reparenting
                                                  return
    // Assume 0->1
    l->nr_items++
    // Assume 0->1
    nlru->nr_items++
IIUC, we use nlru->lock to serialise this scenario.
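[For context, a simplified sketch of the list_lru_add() side, paraphrased
from mm/list_lru.c of that era; the list_lru_from_kmem() argument list is
assumed and the shrinker-bit handling is elided. The point is that the
memcg lookup and both increments all happen inside one nlru->lock
critical section, while the reparenting check reads nr_items without
that lock:]

bool list_lru_add(struct list_lru *lru, struct list_head *item)
{
	int nid = page_to_nid(virt_to_page(item));
	struct list_lru_node *nlru = &lru->node[nid];
	struct list_lru_one *l;
	struct mem_cgroup *memcg;

	spin_lock(&nlru->lock);
	if (list_empty(item)) {
		/* May still find the lru of the dying memcg. */
		l = list_lru_from_kmem(lru, nid, item, &memcg);
		list_add_tail(item, &l->list);
		l->nr_items++;		/* 0->1 in the scenario above */
		nlru->nr_items++;	/* 0->1 in the scenario above */
		spin_unlock(&nlru->lock);
		return true;
	}
	spin_unlock(&nlru->lock);
	return false;
}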
I guess this race is theoretically possible but very unlikely since it
means a very long pause between list_lru_from_kmem() and the increment
of nr_items.
It is more likely in a VM, where a vCPU can be preempted for a long time between those two points.
How about the following changes to make sure that this race can't happen?
diff --git a/mm/list_lru.c b/mm/list_lru.c
index c669d87001a6..c31a0a8ad4e7 100644
--- a/mm/list_lru.c
+++ b/mm/list_lru.c
@@ -395,9 +395,10 @@ static void memcg_reparent_list_lru_node(struct list_lru *lru, int nid,
 	struct list_lru_one *src, *dst;

 	/*
-	 * If there is no lru entry in this nlru, we can skip it immediately.
+	 * If there is no lru entry in this nlru and the nlru->lock is free,
+	 * we can skip it immediately.
 	 */
-	if (!READ_ONCE(nlru->nr_items))
+	if (!READ_ONCE(nlru->nr_items) && !spin_is_locked(&nlru->lock))
I think we also should insert a smp_rmb() between those two loads.
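[Made concrete, that suggestion would restructure the check roughly as
follows; a sketch, not a posted patch:]

	if (!READ_ONCE(nlru->nr_items)) {
		/*
		 * smp_rmb() orders the nr_items load before the
		 * lock-state load on this CPU; whether that is
		 * sufficient on weakly ordered architectures is what
		 * the rest of the thread debates.
		 */
		smp_rmb();
		if (!spin_is_locked(&nlru->lock))
			return;
	}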
Thinking about this some more, I believe that adding the spin_is_locked() check
will be enough for x86. However, that will likely not be enough for arches
with more relaxed memory ordering. So the safest way to avoid this
possible race is to move the check inside the lock critical section,
though that comes with a slightly higher overhead for the 0 nr_items case. I
will send out a patch to correct that. Thanks for bringing this possible race
to my attention.
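[That safer variant would look something like this; a sketch of the
direction only, not the actual follow-up patch, and assuming
spin_lock_irq() matching how this function takes nlru->lock elsewhere:]

	spin_lock_irq(&nlru->lock);
	/*
	 * Checked under nlru->lock: a racing list_lru_add() has either
	 * already completed, so its addition is visible in nr_items,
	 * or it will acquire the lock after us and find the already
	 * reparented lru. The cost is taking the lock even when
	 * nr_items is 0.
	 */
	if (nlru->nr_items) {
		/* ... splice src onto dst and transfer the counts ... */
	}
	spin_unlock_irq(&nlru->lock);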
Yes, I think it's not enough:
CPU0                                    CPU1
READ_ONCE(&nlru->nr_items) -> 0
                                        spin_lock(&nlru->lock);
                                        nlru->nr_items++;
                                        spin_unlock(&nlru->lock);
&& !spin_is_locked(&nlru->lock) -> 0
I have actually thought of that. I was even thinking about reading nr_items
again after spin_is_locked(). Still, for arches with relaxed memory
ordering, when a memory write by one CPU becomes visible to another CPU
can be highly variable. It is very hard to prove that it is completely safe.
x86 has stricter memory ordering, and it is the only architecture where
I have enough confidence that doing the check without taking the lock can be
safe. Perhaps we could use this optimization just for x86 and do it inside
the lock for the rest.
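[Purely illustrative of the idea being floated here, not proposed code;
the IS_ENABLED(CONFIG_X86) guard is my own construction:]

	/*
	 * Hypothetical: trust x86's stronger memory ordering for the
	 * lockless fast path; every other architecture falls through
	 * and checks nr_items under the lock.
	 */
	if (IS_ENABLED(CONFIG_X86) &&
	    !READ_ONCE(nlru->nr_items) && !spin_is_locked(&nlru->lock))
		return;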
Hm, is this such a big problem in real life? Can you describe the setup?
I'm somewhat resistant to the idea of having arch-specific optimizations here
without a HUGE reason.
I am just throwing this idea out for discussion. It does not mean that I
want to do an arch-specific patch unless there is performance data
indicating a substantial gain in some use cases.
Cheers,
Longman