Re: [PATCH-mm v3] mm/list_lru: Optimize memcg_reparent_list_lru_node()

On 3/22/22 22:12, Muchun Song wrote:
> On Wed, Mar 23, 2022 at 9:55 AM Waiman Long <longman@xxxxxxxxxx> wrote:
>> On 3/22/22 21:06, Muchun Song wrote:
>>> On Wed, Mar 9, 2022 at 10:40 PM Waiman Long <longman@xxxxxxxxxx> wrote:
>>>> Since commit 2c80cd57c743 ("mm/list_lru.c: fix list_lru_count_node()
>>>> to be race free"), we are tracking the total number of lru
>>>> entries in a list_lru_node in its nr_items field.  In the case of
>>>> memcg_reparent_list_lru_node(), there is nothing to be done if nr_items
>>>> is 0.  We don't even need to take the nlru->lock as no new lru entry
>>>> could be added by a racing list_lru_add() to the draining src_idx memcg
>>>> at this point.
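
For context, the optimization under discussion amounts to an early-exit
check at the top of memcg_reparent_list_lru_node().  A minimal sketch of
that shape (the signature and the elided reparenting logic are
approximations, not the exact source):

    static void memcg_reparent_list_lru_node(struct list_lru *lru, int nid,
                                             int src_idx,
                                             struct mem_cgroup *dst_memcg)
    {
            struct list_lru_node *nlru = &lru->node[nid];

            /*
             * Proposed fast path: if this node holds no lru entries,
             * skip it without taking nlru->lock.  READ_ONCE() is used
             * because nr_items is read locklessly here.
             */
            if (!READ_ONCE(nlru->nr_items))
                    return;

            /* ... take nlru->lock and reparent the src_idx lists ... */
    }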
>>> Hi Waiman,
>>>
>>> Sorry for the late reply.  Quick question: what if there is an inflight
>>> list_lru_add()?  How about the following race?
>>>
>>> CPU0:                               CPU1:
>>> list_lru_add()
>>>       spin_lock(&nlru->lock)
>>>       l = list_lru_from_kmem(memcg)
>>>                                       memcg_reparent_objcgs(memcg)
>>>                                       memcg_reparent_list_lrus(memcg)
>>>                                           memcg_reparent_list_lru()
>>>                                               memcg_reparent_list_lru_node()
>>>                                                   if (!READ_ONCE(nlru->nr_items))
>>>                                                       // Miss reparenting
>>>                                                       return
>>>       // Assume 0->1
>>>       l->nr_items++
>>>       // Assume 0->1
>>>       nlru->nr_items++
>>>
>>> IIUC, we use nlru->lock to serialise this scenario.
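
To make the window concrete, here is a simplified sketch of the
list_lru_add() path (helper signatures are approximated and the
shrinker-bit handling is elided; treat it as illustrative, not the exact
source):

    bool list_lru_add(struct list_lru *lru, struct list_head *item)
    {
            int nid = page_to_nid(virt_to_page(item));
            struct list_lru_node *nlru = &lru->node[nid];
            struct mem_cgroup *memcg;
            struct list_lru_one *l;

            spin_lock(&nlru->lock);
            if (list_empty(item)) {
                    /*
                     * Race window opens: the memcg looked up here may be
                     * reparented before the counters below are bumped.
                     */
                    l = list_lru_from_kmem(lru, nid, item, &memcg);
                    list_add_tail(item, &l->list);
                    l->nr_items++;
                    /*
                     * Window closes: only now can the lockless skip check
                     * in memcg_reparent_list_lru_node() see a non-zero count.
                     */
                    nlru->nr_items++;
                    spin_unlock(&nlru->lock);
                    return true;
            }
            spin_unlock(&nlru->lock);
            return false;
    }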
>> I guess this race is theoretically possible but very unlikely since it
>> means a very long pause between list_lru_from_kmem() and the increment
>> of nr_items.
> It is more likely in a VM.
>
>> How about the following changes to make sure that this race can't happen?
>>
>> diff --git a/mm/list_lru.c b/mm/list_lru.c
>> index c669d87001a6..c31a0a8ad4e7 100644
>> --- a/mm/list_lru.c
>> +++ b/mm/list_lru.c
>> @@ -395,9 +395,10 @@ static void memcg_reparent_list_lru_node(struct list_lru *lru, int nid,
>>          struct list_lru_one *src, *dst;
>>
>>          /*
>> -        * If there is no lru entry in this nlru, we can skip it immediately.
>> +        * If there is no lru entry in this nlru and the nlru->lock is free,
>> +        * we can skip it immediately.
>>           */
>> -       if (!READ_ONCE(nlru->nr_items))
>> +       if (!READ_ONCE(nlru->nr_items) && !spin_is_locked(&nlru->lock))
> I think we also should insert a smp_rmb() between those two loads.
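
Combining the two suggestions, the fast path would look roughly like this
(a sketch of the idea under discussion, not a posted patch; the smp_rmb()
keeps the two lockless loads from being reordered):

    if (!READ_ONCE(nlru->nr_items)) {
            smp_rmb();      /* order nr_items load before lock-state load */
            if (!spin_is_locked(&nlru->lock))
                    return;
    }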

Thinking about this some more, I believe that adding the spin_is_locked()
check will be enough for x86. However, that will likely not be enough for
arches with more relaxed memory semantics. So the safest way to avoid this
possible race is to move the check inside the lock critical section, though
that comes with a slightly higher overhead for the 0 nr_items case. I will
send out a patch to correct that. Thanks for bringing this possible race to
my attention.
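
The variant described above would look roughly like this (a sketch, not
the posted fix; spin_lock_irq() is assumed here based on the existing
nlru->lock usage):

    spin_lock_irq(&nlru->lock);
    /*
     * Checked under the lock: a racing list_lru_add() has either
     * already completed (so nr_items is non-zero here) or has not
     * yet taken the lock (and will then find the reparented memcg).
     */
    if (!nlru->nr_items)
            goto out;

    /* ... reparent the src_idx lists to the parent memcg ... */
out:
    spin_unlock_irq(&nlru->lock);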

Cheers,
Longman