On 7/22/24 4:10 AM, Li Zhijian wrote:
> It's expected that no page should be left in pcp_list after calling
> zone_pcp_disable() in offline_pages(). Previously, it was observed that
> offline_pages() gets stuck [1] due to some pages remaining in pcp_list.
>
> Cause:
> There is a race condition between drain_pages_zone() and __rmqueue_pcplist()
> involving the pcp->count variable. See the scenario below:
>
>         CPU0                              CPU1
>     ----------------                  ---------------
>                                       spin_lock(&pcp->lock);
>                                       __rmqueue_pcplist() {
>   zone_pcp_disable() {
>                                         /* list is empty */
>                                         if (list_empty(list)) {
>                                           /* add pages to pcp_list */
>                                           alloced = rmqueue_bulk()
>     mutex_lock(&pcp_batch_high_lock)
>     ...
>     __drain_all_pages() {
>       drain_pages_zone() {
>         /* read pcp->count, it's 0 here */
>         count = READ_ONCE(pcp->count)
>         /* 0 means nothing to drain */
>                                           /* update pcp->count */
>                                           pcp->count += alloced << order;
>         ...
>         ...
>                                       spin_unlock(&pcp->lock);
>
> In this case, even after calling zone_pcp_disable(), there are still some
> pages left in pcp_list. And since these pages in pcp_list are neither movable
> nor isolated, offline_pages() gets stuck as a result.
>
> Solution:
> Expand the scope of the pcp->lock to also protect pcp->count in
> drain_pages_zone(), to ensure no pages are left in the pcp list after
> zone_pcp_disable().
>
> [1] https://lore.kernel.org/linux-mm/6a07125f-e720-404c-b2f9-e55f3f166e85@xxxxxxxxxxx/
>
> Cc: David Hildenbrand <david@xxxxxxxxxx>
> Cc: Vlastimil Babka (SUSE) <vbabka@xxxxxxxxxx>
> Reported-by: Yao Xingtao <yaoxt.fnst@xxxxxxxxxxx>
> Signed-off-by: Li Zhijian <lizhijian@xxxxxxxxxxx>

Can we find a breaking commit for Fixes: ?

> ---
> V2:
> - Narrow down the scope of the spin_lock() to limit the draining latency. # Vlastimil and David
> - In the above scenario, it's sufficient to read pcp->count once with the lock held, and it
>   fully fixed my issue [1] in thousands of runs (it happened in more than 5% of runs before).

That should be ok indeed, but...

> RFC:
> https://lore.kernel.org/linux-mm/20240716073929.843277-1-lizhijian@xxxxxxxxxxx/
> ---
>  mm/page_alloc.c | 5 ++++-
>  1 file changed, 4 insertions(+), 1 deletion(-)
>
> diff --git a/mm/page_alloc.c b/mm/page_alloc.c
> index 9ecf99190ea2..5388a35c4e9c 100644
> --- a/mm/page_alloc.c
> +++ b/mm/page_alloc.c
> @@ -2323,8 +2323,11 @@ void drain_zone_pages(struct zone *zone, struct per_cpu_pages *pcp)
>  static void drain_pages_zone(unsigned int cpu, struct zone *zone)
>  {
>  	struct per_cpu_pages *pcp = per_cpu_ptr(zone->per_cpu_pageset, cpu);
> -	int count = READ_ONCE(pcp->count);
> +	int count;
>
> +	spin_lock(&pcp->lock);
> +	count = pcp->count;
> +	spin_unlock(&pcp->lock);
>  	while (count) {
>  		int to_drain = min(count, pcp->batch << CONFIG_PCP_BATCH_SCALE_MAX);
>  		count -= to_drain;

It's wasteful to do a lock/unlock cycle just to read the count. It could
rather look something like this:

	while (true)
		spin_lock(&pcp->lock);
		count = pcp->count;
		...
		count -= to_drain;

		if (to_drain)
			drain_zone_pages(...)
		...
		spin_unlock(&pcp->lock);

		if (!count)
			break;
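
In case it helps, below is a rough, untested sketch of that shape. It assumes
the loop body keeps calling free_pcppages_bulk(), as the current
drain_pages_zone() does; only the locking around the pcp->count read changes:

static void drain_pages_zone(unsigned int cpu, struct zone *zone)
{
        struct per_cpu_pages *pcp = per_cpu_ptr(zone->per_cpu_pageset, cpu);
        int count;

        do {
                spin_lock(&pcp->lock);
                /*
                 * Read pcp->count under the lock so a concurrent
                 * __rmqueue_pcplist() cannot bump it right after we
                 * sampled 0 and decided there is nothing to drain.
                 */
                count = pcp->count;
                if (count) {
                        int to_drain = min(count,
                                pcp->batch << CONFIG_PCP_BATCH_SCALE_MAX);

                        /* Drain one batch while still holding pcp->lock. */
                        free_pcppages_bulk(zone, to_drain, pcp, 0);
                        count -= to_drain;
                }
                spin_unlock(&pcp->lock);
        } while (count);
}

That keeps the count read and the batch drain under a single lock/unlock per
iteration, while still dropping the lock between batches, so the latency
concern addressed in V2 should be preserved.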