Re: [PATCH 1/3] mm: page_alloc: don't steal single pages from biggest buddy

On Mon, Feb 23, 2025 at 07:08:24PM -0500, Johannes Weiner wrote:
> The fallback code searches for the biggest buddy first in an attempt
> to steal the whole block and encourage type grouping down the line.
> 
> The approach used to be this:
> 
> - Non-movable requests will split the largest buddy and steal the
>   remainder. This splits up contiguity, but it allows subsequent
>   requests of this type to fall back into adjacent space.
> 
> - Movable requests go and look for the smallest buddy instead. The
>   thinking is that movable requests can be compacted, so grouping is
>   less important than retaining contiguity.
> 
> c0cd6f557b90 ("mm: page_alloc: fix freelist movement during block
> conversion") enforces freelist type hygiene, which restricts stealing
> to either claiming the whole block or just taking the requested chunk;
> no additional pages or buddy remainders can be stolen any more.
> 
> The patch mishandled when to switch to finding the smallest buddy in
> that new reality. As a result, it may steal the exact request size,
> but from the biggest buddy. This causes fracturing for no good reason.
> 
> Fix this by committing to the new behavior: either steal the whole
> block, or fall back to the smallest buddy.
> 
> Remove single-page stealing from steal_suitable_fallback(). Rename it
> to try_to_steal_block() to make the intentions clear. If this fails,
> always fall back to the smallest buddy.

Nit - I think the try_to_steal_block() changes could be a separate
patch; the history might be easier to understand if it went:

[1/N] mm: page_alloc: don't steal single pages from biggest buddy
[2/N] mm: page_alloc: drop unused logic in steal_suitable_fallback()

(But not a big deal, it's not that hard to follow as-is).

>  static __always_inline struct page *
>  __rmqueue_fallback(struct zone *zone, int order, int start_migratetype,
> @@ -2291,45 +2289,35 @@ __rmqueue_fallback(struct zone *zone, int order, int start_migratetype,
>  		if (fallback_mt == -1)
>  			continue;
>  
> -		/*
> -		 * We cannot steal all free pages from the pageblock and the
> -		 * requested migratetype is movable. In that case it's better to
> -		 * steal and split the smallest available page instead of the
> -		 * largest available page, because even if the next movable
> -		 * allocation falls back into a different pageblock than this
> -		 * one, it won't cause permanent fragmentation.
> -		 */
> -		if (!can_steal && start_migratetype == MIGRATE_MOVABLE
> -					&& current_order > order)
> -			goto find_smallest;
> +		if (!can_steal)
> +			break;
>  
> -		goto do_steal;
> +		page = get_page_from_free_area(area, fallback_mt);
> +		page = try_to_steal_block(zone, page, current_order, order,
> +					  start_migratetype, alloc_flags);
> +		if (page)
> +			goto got_one;
>  	}
>  
> -	return NULL;
> +	if (alloc_flags & ALLOC_NOFRAGMENT)
> +		return NULL;

Is this a separate change? Is it a bug that we currently allow
stealing from a fallback type when ALLOC_NOFRAGMENT? (I wonder if
the second loop was supposed to start from min_order).
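That is, something like the sketch below - purely to spell out the
question, not a tested proposal. It reuses the min_order that, if I'm
reading it right, __rmqueue_fallback() already raises to
pageblock_order for ALLOC_NOFRAGMENT requests at the top of the
function:

	/* hypothetical: honor ALLOC_NOFRAGMENT instead of bailing out */
	for (current_order = min_order; current_order < NR_PAGE_ORDERS;
	     current_order++) {
		area = &(zone->free_area[current_order]);
		fallback_mt = find_suitable_fallback(area, current_order,
				start_migratetype, false, &can_steal);
		if (fallback_mt == -1)
			continue;

		page = get_page_from_free_area(area, fallback_mt);
		page_del_and_expand(zone, page, order, current_order,
				    fallback_mt);
		goto got_one;
	}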

>  
> -find_smallest:
> +	/* No luck stealing blocks. Find the smallest fallback page */
>  	for (current_order = order; current_order < NR_PAGE_ORDERS; current_order++) {
>  		area = &(zone->free_area[current_order]);
>  		fallback_mt = find_suitable_fallback(area, current_order,
>  				start_migratetype, false, &can_steal);
> -		if (fallback_mt != -1)
> -			break;
> -	}
> -
> -	/*
> -	 * This should not happen - we already found a suitable fallback
> -	 * when looking for the largest page.
> -	 */
> -	VM_BUG_ON(current_order > MAX_PAGE_ORDER);
> +		if (fallback_mt == -1)
> +			continue;
>  
> -do_steal:
> -	page = get_page_from_free_area(area, fallback_mt);
> +		page = get_page_from_free_area(area, fallback_mt);
> +		page_del_and_expand(zone, page, order, current_order, fallback_mt);
> +		goto got_one;
> +	}
>  
> -	/* take off list, maybe claim block, expand remainder */
> -	page = steal_suitable_fallback(zone, page, current_order, order,
> -				       start_migratetype, alloc_flags, can_steal);
> +	return NULL;
>  
> +got_one:
>  	trace_mm_page_alloc_extfrag(page, order, current_order,
>  		start_migratetype, fallback_mt);

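FWIW, with that one question aside, the resulting shape of
__rmqueue_fallback() reads well to me. Roughly, paraphrasing and
eliding the details already shown above:

	/* Pass 1: biggest buddies first, whole-block claims only */
	for (current_order = MAX_PAGE_ORDER; current_order >= min_order;
	     current_order--)
		/* try_to_steal_block(), goto got_one on success */

	/* rather fall back to another zone than fragment a block here */
	if (alloc_flags & ALLOC_NOFRAGMENT)
		return NULL;

	/* Pass 2: smallest fitting buddy, take only the request */
	for (current_order = order; current_order < NR_PAGE_ORDERS;
	     current_order++)
		/* page_del_and_expand(), goto got_one */

which keeps the big buddies intact whenever a whole-block claim
isn't possible.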


