[PATCH] memcg: Avoid stock refill only if stock_lock can't be acquired

Since commit 01d37228d331 ("memcg: Use trylock to access memcg
stock_lock.") consume_stock() can fail if it can't obtain
memcg_stock.stock_lock. In that case try_charge_memcg() also avoids
refilling or flushing the stock when gfp flags indicate we are in a
context where obtaining the lock could fail.

However, consume_stock() can also fail because the stock is depleted or
belongs to a different memcg. In those cases, avoiding the stock refill
reduces caching efficiency, as a refill could still succeed while memory
is available. This has caused various regressions to be reported by the
kernel test robot.

To fix this, make the decision to avoid stock refill more precise by
making consume_stock() return -EBUSY when it fails to obtain stock_lock,
and using that for the no-refill decision.

Fixes: 01d37228d331 ("memcg: Use trylock to access memcg stock_lock.")
Reported-by: kernel test robot <oliver.sang@xxxxxxxxx>
Closes: https://lore.kernel.org/oe-lkp/202503101254.cfd454df-lkp@xxxxxxxxx

Signed-off-by: Vlastimil Babka <vbabka@xxxxxxx>
---
 mm/memcontrol.c | 22 ++++++++++++++--------
 1 file changed, 14 insertions(+), 8 deletions(-)

diff --git a/mm/memcontrol.c b/mm/memcontrol.c
index 092cab99dec7..a8371a22c7f4 100644
--- a/mm/memcontrol.c
+++ b/mm/memcontrol.c
@@ -1772,22 +1772,23 @@ static bool obj_stock_flush_required(struct memcg_stock_pcp *stock,
  * stock, and at least @nr_pages are available in that stock.  Failure to
  * service an allocation will refill the stock.
  *
- * returns true if successful, false otherwise.
+ * returns 0 if successful, -EBUSY if lock cannot be acquired, or -ENOMEM
+ * if the memcg does not match or there are not enough pages.
  */
-static bool consume_stock(struct mem_cgroup *memcg, unsigned int nr_pages,
+static int consume_stock(struct mem_cgroup *memcg, unsigned int nr_pages,
 			  gfp_t gfp_mask)
 {
 	struct memcg_stock_pcp *stock;
 	unsigned int stock_pages;
 	unsigned long flags;
-	bool ret = false;
+	int ret = -ENOMEM;
 
 	if (nr_pages > MEMCG_CHARGE_BATCH)
 		return ret;
 
 	if (!localtry_trylock_irqsave(&memcg_stock.stock_lock, flags)) {
 		if (!gfpflags_allow_spinning(gfp_mask))
-			return ret;
+			return -EBUSY;
 		localtry_lock_irqsave(&memcg_stock.stock_lock, flags);
 	}
 
@@ -1795,7 +1796,7 @@ static bool consume_stock(struct mem_cgroup *memcg, unsigned int nr_pages,
 	stock_pages = READ_ONCE(stock->nr_pages);
 	if (memcg == READ_ONCE(stock->cached) && stock_pages >= nr_pages) {
 		WRITE_ONCE(stock->nr_pages, stock_pages - nr_pages);
-		ret = true;
+		ret = 0;
 	}
 
 	localtry_unlock_irqrestore(&memcg_stock.stock_lock, flags);
@@ -2228,13 +2229,18 @@ int try_charge_memcg(struct mem_cgroup *memcg, gfp_t gfp_mask,
 	bool drained = false;
 	bool raised_max_event = false;
 	unsigned long pflags;
+	int consume_ret;
 
 retry:
-	if (consume_stock(memcg, nr_pages, gfp_mask))
+	consume_ret = consume_stock(memcg, nr_pages, gfp_mask);
+	if (!consume_ret)
 		return 0;
 
-	if (!gfpflags_allow_spinning(gfp_mask))
-		/* Avoid the refill and flush of the older stock */
+	/*
+	 * Avoid the refill and flush of the older stock if we failed to acquire
+	 * the stock_lock
+	 */
+	if (consume_ret == -EBUSY)
 		batch = nr_pages;
 
 	if (!do_memsw_account() ||
-- 
2.48.1