Re: [PATCH v3 1/3] mm/compaction: enable compacting >0 order folios.

On 2/2/24 17:15, Zi Yan wrote:
> From: Zi Yan <ziy@xxxxxxxxxx>
> 
> migrate_pages() supports >0 order folio migration and during compaction,
> even if compaction_alloc() cannot provide >0 order free pages,
> migrate_pages() can split the source page and try to migrate the base pages
> from the split. It can be a baseline and start point for adding support for
> compacting >0 order folios.
> 
> Suggested-by: Huang Ying <ying.huang@xxxxxxxxx>
> Signed-off-by: Zi Yan <ziy@xxxxxxxxxx>
> ---
>  mm/compaction.c | 43 +++++++++++++++++++++++++++++++++++--------
>  1 file changed, 35 insertions(+), 8 deletions(-)
> 
> diff --git a/mm/compaction.c b/mm/compaction.c
> index 4add68d40e8d..e43e898d2c77 100644
> --- a/mm/compaction.c
> +++ b/mm/compaction.c
> @@ -816,6 +816,21 @@ static bool too_many_isolated(struct compact_control *cc)
>  	return too_many;
>  }
>  
> +/*
> + * 1. if the page order is larger than or equal to target_order (i.e.,
> + * cc->order and when it is not -1 for global compaction), skip it since
> + * target_order already indicates no free page with larger than target_order
> + * exists and later migrating it will most likely fail;
> + *
> + * 2. compacting > pageblock_order pages does not improve memory fragmentation,
> + * skip them;
> + */
> +static bool skip_isolation_on_order(int order, int target_order)
> +{
> +	return (target_order != -1 && order >= target_order) ||
> +		order >= pageblock_order;
> +}
> +
>  /**
>   * isolate_migratepages_block() - isolate all migrate-able pages within
>   *				  a single pageblock
> @@ -1010,7 +1025,7 @@ isolate_migratepages_block(struct compact_control *cc, unsigned long low_pfn,
>  		/*
>  		 * Regardless of being on LRU, compound pages such as THP and
>  		 * hugetlbfs are not to be compacted unless we are attempting
> -		 * an allocation much larger than the huge page size (eg CMA).
> +		 * an allocation larger than the compound page size.
>  		 * We can potentially save a lot of iterations if we skip them
>  		 * at once. The check is racy, but we can consider only valid
>  		 * values and the only danger is skipping too much.
> @@ -1018,11 +1033,18 @@ isolate_migratepages_block(struct compact_control *cc, unsigned long low_pfn,
>  		if (PageCompound(page) && !cc->alloc_contig) {
>  			const unsigned int order = compound_order(page);
>  
> -			if (likely(order <= MAX_PAGE_ORDER)) {
> -				low_pfn += (1UL << order) - 1;
> -				nr_scanned += (1UL << order) - 1;
> +			/*
> +			 * Skip based on page order and compaction target order
> +			 * and skip hugetlbfs pages.
> +			 */
> +			if (skip_isolation_on_order(order, cc->order) ||
> +			    PageHuge(page)) {

Hm, I'd try to avoid a new PageHuge() test here.

Earlier we have a block that does
                if (PageHuge(page) && cc->alloc_contig) {
			...

I think I'd rather rewrite it to handle the PageHuge() case completely and
just make it skip the 1UL << order pages there for !cc->alloc_contig (rough
sketch below), even if it means duplicating a bit of the low_pfn and
nr_scanned bumping code.

Which reminds me the PageHuge() check there is probably still broken ATM:

https://lore.kernel.org/all/8fa1c95c-4749-33dd-42ba-243e492ab109@xxxxxxx/

An even better reason not to add another one. If the huge page materialized
since the first check, we should bail out when testing PageLRU later anyway.
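
Completely untested sketch of what I mean, assuming the existing
alloc_contig handling (isolate_or_dissolve_huge_page(), locking, ret) stays
as it is:

	if (PageHuge(page)) {
		const unsigned int order = compound_order(page);

		/*
		 * Regular compaction can't migrate hugetlb pages, so
		 * skip the whole compound page right away.
		 */
		if (!cc->alloc_contig) {
			if (order <= MAX_PAGE_ORDER) {
				low_pfn += (1UL << order) - 1;
				nr_scanned += (1UL << order) - 1;
			}
			goto isolate_fail;
		}

		/* existing alloc_contig path, unchanged */
		...
	}

and then the PageCompound() block below wouldn't need a PageHuge() check at
all.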

> +				if (order <= MAX_PAGE_ORDER) {
> +					low_pfn += (1UL << order) - 1;
> +					nr_scanned += (1UL << order) - 1;
> +				}
> +				goto isolate_fail;
>  			}
> -			goto isolate_fail;
>  		}
>  
>  		/*
> @@ -1165,10 +1187,11 @@ isolate_migratepages_block(struct compact_control *cc, unsigned long low_pfn,
>  			}
>  
>  			/*
> -			 * folio become large since the non-locked check,
> -			 * and it's on LRU.
> +			 * Check LRU folio order under the lock
>  			 */
> -			if (unlikely(folio_test_large(folio) && !cc->alloc_contig)) {
> +			if (unlikely(skip_isolation_on_order(folio_order(folio),
> +							     cc->order) &&
> +				     !cc->alloc_contig)) {
>  				low_pfn += folio_nr_pages(folio) - 1;
>  				nr_scanned += folio_nr_pages(folio) - 1;
>  				folio_set_lru(folio);
> @@ -1786,6 +1809,10 @@ static struct folio *compaction_alloc(struct folio *src, unsigned long data)
>  	struct compact_control *cc = (struct compact_control *)data;
>  	struct folio *dst;
>  
> +	/* this makes migrate_pages() split the source page and retry */
> +	if (folio_test_large(src) > 0)
> +		return NULL;
> +
>  	if (list_empty(&cc->freepages)) {
>  		isolate_freepages(cc);
>  
