Re: [merged mm-stable] mm-fix-a-potential-infinite-loop-in-start_isolate_page_range.patch removed from -mm tree

On 5/26/22 23:03, Andrew Morton wrote:
> The quilt patch titled
>      Subject: mm: fix a potential infinite loop in start_isolate_page_range()
> has been removed from the -mm tree.  Its filename was
>      mm-fix-a-potential-infinite-loop-in-start_isolate_page_range.patch
> 
> This patch was dropped because it was merged into the mm-stable branch
> of git://git.kernel.org/pub/scm/linux/kernel/git/akpm/mm

Was it with the followup fix?
https://lore.kernel.org/all/8C74B0BA-3C69-46EE-8C14-0CE6787D0B65@xxxxxxxxxx/
The patch content below suggests it wasn't, but maybe it's just how the
mails are generated.

> ------------------------------------------------------
> From: Zi Yan <ziy@xxxxxxxxxx>
> Subject: mm: fix a potential infinite loop in start_isolate_page_range()
> Date: Tue, 24 May 2022 15:47:56 -0400
> 
> In isolate_single_pageblock(), called by start_isolate_page_range(), there
> are several pageblock isolation issues that can cause a potential infinite
> loop when isolating a page range.  This was reported by Qian Cai.
> 
> 1. The pageblock was isolated by just changing the pageblock migratetype,
>    without checking for unmovable pages. Call set_migratetype_isolate() to
>    isolate the pageblock properly.
> 2. An off-by-one error caused pages to be migrated unnecessarily when the
>    page does not actually cross the pageblock boundary (see the small
>    illustration after this list).
> 3. Migrating a compound page across the pageblock boundary and then
>    splitting the free page later has a small race window in which the free
>    page might be allocated again, making the code retry and causing a
>    potential infinite loop. Temporarily set the to-be-migrated page's
>    pageblock to MIGRATE_ISOLATE to prevent that, and bail out early if no
>    free page is found after page migration.
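
(Not part of the patch, just to spell out point 2 above: a tiny userspace
sketch of the boundary check. The pfn values and the 512-page compound
size are made up for illustration; only the "<" vs "<=" comparison mirrors
the hunk further down.)

#include <stdbool.h>
#include <stdio.h>

/* Old check: a page ending exactly at boundary_pfn is treated as crossing. */
static bool crosses_boundary_old(unsigned long head_pfn, unsigned long nr_pages,
				 unsigned long boundary_pfn)
{
	return !(head_pfn + nr_pages < boundary_pfn);
}

/* Fixed check: such a page is correctly skipped. */
static bool crosses_boundary_new(unsigned long head_pfn, unsigned long nr_pages,
				 unsigned long boundary_pfn)
{
	return !(head_pfn + nr_pages <= boundary_pfn);
}

int main(void)
{
	/* A 512-page compound page whose last pfn is boundary_pfn - 1. */
	unsigned long boundary_pfn = 0x80000;
	unsigned long nr_pages = 512;
	unsigned long head_pfn = boundary_pfn - nr_pages;

	/* Old: 1 (migrated needlessly).  New: 0 (left alone). */
	printf("old: %d, new: %d\n",
	       crosses_boundary_old(head_pfn, nr_pages, boundary_pfn),
	       crosses_boundary_new(head_pfn, nr_pages, boundary_pfn));
	return 0;
}
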
> 
> An additional fix to split_free_page() avoids a crash in
> __free_one_page().  When the free page is split at the specified
> split_pfn_offset, free_page_order should be derived from both the lowest
> set bit of free_page_pfn and the highest set bit of split_pfn_offset,
> using the smaller of the two.  For example, if free_page_pfn=0x10000 and
> split_pfn_offset=0xc000, the split should produce chunks of 0x8000 then
> 0x4000 pages, instead of 0x4000 then 0x8000 as the original algorithm did.
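
(Also not part of the patch: a standalone userspace sketch reproducing the
0x10000/0xc000 example with the new min(__ffs, __fls) order choice.
__ffs/__fls are reimplemented with GCC builtins here, and decrementing
split_pfn_offset by each freed chunk is my assumption about the rest of
the loop, which the hunk below does not show.)

#include <stdio.h>

static int sketch_ffs(unsigned long x) { return __builtin_ctzl(x); }      /* lowest set bit */
static int sketch_fls(unsigned long x) { return 63 - __builtin_clzl(x); } /* highest set bit */
static int min_int(int a, int b) { return a < b ? a : b; }

int main(void)
{
	int order = 16;                          /* the free page spans 1UL << 16 pfns */
	unsigned long free_page_pfn = 0x10000;
	unsigned long split_pfn_offset = 0xc000; /* split point inside the free page */
	unsigned long pfn = free_page_pfn;

	/* Walk the part of the free page below the split point. */
	while (split_pfn_offset) {
		int free_page_order = min_int(pfn ? sketch_ffs(pfn) : order,
					      sketch_fls(split_pfn_offset));

		printf("free chunk at pfn 0x%lx, size 0x%lx\n",
		       pfn, 1UL << free_page_order);

		pfn += 1UL << free_page_order;
		split_pfn_offset -= 1UL << free_page_order;
	}
	/* Prints 0x8000 then 0x4000.  The old ffs()-based formula would pick
	 * 0x4000 then 0x8000, and an order-15 chunk at pfn 0x14000 is not
	 * buddy-aligned, which is what the changelog says crashed in
	 * __free_one_page().
	 */
	return 0;
}
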
> 
> [akpm@xxxxxxxxxxxxxxxxxxxx: suppress min() warning]
> Link: https://lkml.kernel.org/r/20220524194756.1698351-1-zi.yan@xxxxxxxx
> Fixes: b2c9e2fbba3253 ("mm: make alloc_contig_range work at pageblock granularity")
> Signed-off-by: Zi Yan <ziy@xxxxxxxxxx>
> Reported-by: Qian Cai <quic_qiancai@xxxxxxxxxxx>
> Cc: Christophe Leroy <christophe.leroy@xxxxxxxxxx>
> Cc: David Hildenbrand <david@xxxxxxxxxx>
> Cc: Eric Ren <renzhengeek@xxxxxxxxx>
> Cc: Mel Gorman <mgorman@xxxxxxxxxxxxxxxxxxx>
> Cc: Mike Rapoport <rppt@xxxxxxxxxxxxx>
> Cc: Minchan Kim <minchan@xxxxxxxxxx>
> Cc: Oscar Salvador <osalvador@xxxxxxx>
> Cc: Vlastimil Babka <vbabka@xxxxxxx>
> Signed-off-by: Andrew Morton <akpm@xxxxxxxxxxxxxxxxxxxx>
> ---
> 
>  mm/page_alloc.c     |    7 ++++-
>  mm/page_isolation.c |   52 ++++++++++++++++++++++++++++++++----------
>  2 files changed, 46 insertions(+), 13 deletions(-)
> 
> --- a/mm/page_alloc.c~mm-fix-a-potential-infinite-loop-in-start_isolate_page_range
> +++ a/mm/page_alloc.c
> @@ -1114,13 +1114,18 @@ void split_free_page(struct page *free_p
>  	unsigned long flags;
>  	int free_page_order;
>  
> +	if (split_pfn_offset == 0)
> +		return;
> +
>  	spin_lock_irqsave(&zone->lock, flags);
>  	del_page_from_free_list(free_page, zone, order);
>  	for (pfn = free_page_pfn;
>  	     pfn < free_page_pfn + (1UL << order);) {
>  		int mt = get_pfnblock_migratetype(pfn_to_page(pfn), pfn);
>  
> -		free_page_order = ffs(split_pfn_offset) - 1;
> +		free_page_order = min_t(int,
> +					pfn ? __ffs(pfn) : order,
> +					__fls(split_pfn_offset));
>  		__free_one_page(pfn_to_page(pfn), pfn, zone, free_page_order,
>  				mt, FPI_NONE);
>  		pfn += 1UL << free_page_order;
> --- a/mm/page_isolation.c~mm-fix-a-potential-infinite-loop-in-start_isolate_page_range
> +++ a/mm/page_isolation.c
> @@ -283,6 +283,7 @@ __first_valid_page(unsigned long pfn, un
>   * isolate_single_pageblock() -- tries to isolate a pageblock that might be
>   * within a free or in-use page.
>   * @boundary_pfn:		pageblock-aligned pfn that a page might cross
> + * @flags:			isolation flags
>   * @gfp_flags:			GFP flags used for migrating pages
>   * @isolate_before:	isolate the pageblock before the boundary_pfn
>   *
> @@ -298,14 +299,15 @@ __first_valid_page(unsigned long pfn, un
>   * either. The function handles this by splitting the free page or migrating
>   * the in-use page then splitting the free page.
>   */
> -static int isolate_single_pageblock(unsigned long boundary_pfn, gfp_t gfp_flags,
> -			bool isolate_before)
> +static int isolate_single_pageblock(unsigned long boundary_pfn, int flags,
> +			gfp_t gfp_flags, bool isolate_before)
>  {
>  	unsigned char saved_mt;
>  	unsigned long start_pfn;
>  	unsigned long isolate_pageblock;
>  	unsigned long pfn;
>  	struct zone *zone;
> +	int ret;
>  
>  	VM_BUG_ON(!IS_ALIGNED(boundary_pfn, pageblock_nr_pages));
>  
> @@ -325,7 +327,11 @@ static int isolate_single_pageblock(unsi
>  				      zone->zone_start_pfn);
>  
>  	saved_mt = get_pageblock_migratetype(pfn_to_page(isolate_pageblock));
> -	set_pageblock_migratetype(pfn_to_page(isolate_pageblock), MIGRATE_ISOLATE);
> +	ret = set_migratetype_isolate(pfn_to_page(isolate_pageblock), saved_mt, flags,
> +			isolate_pageblock, isolate_pageblock + pageblock_nr_pages);
> +
> +	if (ret)
> +		return ret;
>  
>  	/*
>  	 * Bail out early when the to-be-isolated pageblock does not form
> @@ -374,7 +380,7 @@ static int isolate_single_pageblock(unsi
>  			struct page *head = compound_head(page);
>  			unsigned long head_pfn = page_to_pfn(head);
>  
> -			if (head_pfn + nr_pages < boundary_pfn) {
> +			if (head_pfn + nr_pages <= boundary_pfn) {
>  				pfn = head_pfn + nr_pages;
>  				continue;
>  			}
> @@ -386,7 +392,8 @@ static int isolate_single_pageblock(unsi
>  			if (PageHuge(page) || PageLRU(page) || __PageMovable(page)) {
>  				int order;
>  				unsigned long outer_pfn;
> -				int ret;
> +				int page_mt = get_pageblock_migratetype(page);
> +				bool isolate_page = !is_migrate_isolate_page(page);
>  				struct compact_control cc = {
>  					.nr_migratepages = 0,
>  					.order = -1,
> @@ -399,9 +406,31 @@ static int isolate_single_pageblock(unsi
>  				};
>  				INIT_LIST_HEAD(&cc.migratepages);
>  
> +				/*
> +				 * XXX: mark the page as MIGRATE_ISOLATE so that
> +				 * no one else can grab the freed page after migration.
> +				 * Ideally, the page should be freed as two separate
> +				 * pages to be added into separate migratetype free
> +				 * lists.
> +				 */
> +				if (isolate_page) {
> +					ret = set_migratetype_isolate(page, page_mt,
> +						flags, head_pfn, head_pfn + nr_pages);
> +					if (ret)
> +						goto failed;
> +				}
> +
>  				ret = __alloc_contig_migrate_range(&cc, head_pfn,
>  							head_pfn + nr_pages);
>  
> +				/*
> +				 * restore the page's migratetype so that it can
> +				 * be split into separate migratetype free lists
> +				 * later.
> +				 */
> +				if (isolate_page)
> +					unset_migratetype_isolate(page, page_mt);
> +
>  				if (ret)
>  					goto failed;
>  				/*
> @@ -417,10 +446,9 @@ static int isolate_single_pageblock(unsi
>  				order = 0;
>  				outer_pfn = pfn;
>  				while (!PageBuddy(pfn_to_page(outer_pfn))) {
> -					if (++order >= MAX_ORDER) {
> -						outer_pfn = pfn;
> -						break;
> -					}
> +					/* stop if we cannot find the free page */
> +					if (++order >= MAX_ORDER)
> +						goto failed;
>  					outer_pfn &= ~0UL << order;
>  				}
>  				pfn = outer_pfn;
> @@ -435,7 +463,7 @@ static int isolate_single_pageblock(unsi
>  	return 0;
>  failed:
>  	/* restore the original migratetype */
> -	set_pageblock_migratetype(pfn_to_page(isolate_pageblock), saved_mt);
> +	unset_migratetype_isolate(pfn_to_page(isolate_pageblock), saved_mt);
>  	return -EBUSY;
>  }
>  
> @@ -496,12 +524,12 @@ int start_isolate_page_range(unsigned lo
>  	int ret;
>  
>  	/* isolate [isolate_start, isolate_start + pageblock_nr_pages) pageblock */
> -	ret = isolate_single_pageblock(isolate_start, gfp_flags, false);
> +	ret = isolate_single_pageblock(isolate_start, flags, gfp_flags, false);
>  	if (ret)
>  		return ret;
>  
>  	/* isolate [isolate_end - pageblock_nr_pages, isolate_end) pageblock */
> -	ret = isolate_single_pageblock(isolate_end, gfp_flags, true);
> +	ret = isolate_single_pageblock(isolate_end, flags, gfp_flags, true);
>  	if (ret) {
>  		unset_migratetype_isolate(pfn_to_page(isolate_start), migratetype);
>  		return ret;
> _
> 
> Patches currently in -mm which might be from ziy@xxxxxxxxxx are
> 
> 



