Patch "mm/page_alloc: fix pcp->count race between drain_pages_zone() vs __rmqueue_pcplist()" has been added to the 6.6-stable tree

This is a note to let you know that I've just added the patch titled

    mm/page_alloc: fix pcp->count race between drain_pages_zone() vs __rmqueue_pcplist()

to the 6.6-stable tree which can be found at:
    http://www.kernel.org/git/?p=linux/kernel/git/stable/stable-queue.git;a=summary

The filename of the patch is:
     mm-page_alloc-fix-pcp-count-race-between-drain_pages.patch
and it can be found in the queue-6.6 subdirectory.

If you, or anyone else, feels it should not be added to the stable tree,
please let <stable@xxxxxxxxxxxxxxx> know about it.



commit 478b0f5dd0b5cb386eb827e98b0e534ec2313ade
Author: Li Zhijian <lizhijian@xxxxxxxxxxx>
Date:   Tue Jul 23 14:44:28 2024 +0800

    mm/page_alloc: fix pcp->count race between drain_pages_zone() vs __rmqueue_pcplist()
    
    [ Upstream commit 66eca1021a42856d6af2a9802c99e160278aed91 ]
    
    No page is expected to remain in pcp_list after zone_pcp_disable() is
    called in offline_pages().  However, offline_pages() was observed to get
    stuck [1] because some pages remained in pcp_list.
    
    Cause:
    There is a race condition between drain_pages_zone() and __rmqueue_pcplist()
    involving the pcp->count variable.  See the scenario below:
    
             CPU0                              CPU1
        ----------------                    ---------------
                                          spin_lock(&pcp->lock);
                                          __rmqueue_pcplist() {
    zone_pcp_disable() {
                                            /* list is empty */
                                            if (list_empty(list)) {
                                              /* add pages to pcp_list */
                                              alloced = rmqueue_bulk()
      mutex_lock(&pcp_batch_high_lock)
      ...
      __drain_all_pages() {
        drain_pages_zone() {
          /* read pcp->count, it's 0 here */
          count = READ_ONCE(pcp->count)
          /* 0 means nothing to drain */
                                              /* update pcp->count */
                                              pcp->count += alloced << order;
          ...
                                          ...
                                          spin_unlock(&pcp->lock);
    
    In this case, some pages are still left in pcp_list even after
    zone_pcp_disable() returns.  Because these pages are neither movable nor
    isolated, offline_pages() gets stuck as a result.
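    The race can be replayed deterministically in userspace.  The sketch
    below is a model, not kernel code: struct pcp, drain_buggy() and the
    refill constant are illustrative names, and the single count update
    stands in for free_pcppages_bulk().  It forces the exact interleaving
    from the diagram: the drain snapshots count while it is 0, the refill
    lands afterwards, and the snapshot-driven loop drains nothing.

    ```c
    /* Userspace model of the pre-patch bug: drain_pages_zone() trusted a
     * single early READ_ONCE(pcp->count) snapshot, so a refill that landed
     * after the snapshot (CPU1's rmqueue_bulk() above) was never drained.
     * All names here are illustrative, not the kernel's. */
    #include <assert.h>
    #include <stdio.h>

    struct pcp { int count; };

    /* Buggy drain: loops on the stale snapshot instead of re-reading
     * pcp->count under the lock.  Batch clamping is elided. */
    static void drain_buggy(struct pcp *pcp, int snapshot)
    {
        while (snapshot) {
            int to_drain = snapshot;

            pcp->count -= to_drain;  /* stands in for free_pcppages_bulk() */
            snapshot -= to_drain;
        }
    }

    int main(void)
    {
        struct pcp pcp = { .count = 0 };

        int snapshot = pcp.count;    /* CPU0: READ_ONCE(pcp->count) sees 0   */
        pcp.count += 32;             /* CPU1: rmqueue_bulk() refills the list */
        drain_buggy(&pcp, snapshot); /* snapshot of 0 means "nothing to drain" */

        printf("pages left behind: %d\n", pcp.count);  /* prints 32 */
        return 0;
    }
    ```

    Those 32 leftover pages correspond to the ones that leave
    offline_pages() stuck in the real report.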
    
    Solution:
    Expand the scope of pcp->lock in drain_pages_zone() so that pcp->count
    is also read under the lock, ensuring no pages are left in the pcp list
    after zone_pcp_disable().
    
    [1] https://lore.kernel.org/linux-mm/6a07125f-e720-404c-b2f9-e55f3f166e85@xxxxxxxxxxx/
    
    Link: https://lkml.kernel.org/r/20240723064428.1179519-1-lizhijian@xxxxxxxxxxx
    Fixes: 4b23a68f9536 ("mm/page_alloc: protect PCP lists with a spinlock")
    Signed-off-by: Li Zhijian <lizhijian@xxxxxxxxxxx>
    Reported-by: Yao Xingtao <yaoxt.fnst@xxxxxxxxxxx>
    Reviewed-by: Vlastimil Babka <vbabka@xxxxxxx>
    Cc: David Hildenbrand <david@xxxxxxxxxx>
    Cc: <stable@xxxxxxxxxxxxxxx>
    Signed-off-by: Andrew Morton <akpm@xxxxxxxxxxxxxxxxxxxx>
    Signed-off-by: Sasha Levin <sashal@xxxxxxxxxx>

diff --git a/mm/page_alloc.c b/mm/page_alloc.c
index 2c40cf4f1eb2d..39bdbfb5313fb 100644
--- a/mm/page_alloc.c
+++ b/mm/page_alloc.c
@@ -2186,16 +2186,20 @@ void drain_zone_pages(struct zone *zone, struct per_cpu_pages *pcp)
 static void drain_pages_zone(unsigned int cpu, struct zone *zone)
 {
 	struct per_cpu_pages *pcp = per_cpu_ptr(zone->per_cpu_pageset, cpu);
-	int count = READ_ONCE(pcp->count);
-
-	while (count) {
-		int to_drain = min(count, pcp->batch << CONFIG_PCP_BATCH_SCALE_MAX);
-		count -= to_drain;
+	int count;
 
+	do {
 		spin_lock(&pcp->lock);
-		free_pcppages_bulk(zone, to_drain, pcp, 0);
+		count = pcp->count;
+		if (count) {
+			int to_drain = min(count,
+				pcp->batch << CONFIG_PCP_BATCH_SCALE_MAX);
+
+			free_pcppages_bulk(zone, to_drain, pcp, 0);
+			count -= to_drain;
+		}
 		spin_unlock(&pcp->lock);
-	}
+	} while (count);
 }
 
 /*



