Re: [PATCH v3 3/3] mm/compaction: optimize >0 order folio compaction with free page split.

On 2/2/24 17:15, Zi Yan wrote:
> From: Zi Yan <ziy@xxxxxxxxxx>
> 
> During migration in a memory compaction, free pages are placed in an array
> of page lists based on their order. But the desired free page order (i.e.,
> the order of a source page) might not always be present, thus leading to
> migration failures and premature compaction termination. Split a high
> order free page when the source migration page has a lower order to
> increase the migration success rate.
> 
> Note: merging free pages when a migration fails and a lower order free
> page is returned via compaction_free() is possible, but it would require too
> much work. Since the free pages are not buddy pages, it is hard to identify
> these free pages using the existing PFN-based page merging algorithm.
> 
> Signed-off-by: Zi Yan <ziy@xxxxxxxxxx>
> ---
>  mm/compaction.c | 37 ++++++++++++++++++++++++++++++++++++-
>  1 file changed, 36 insertions(+), 1 deletion(-)
> 
> diff --git a/mm/compaction.c b/mm/compaction.c
> index 58a4e3fb72ec..fa9993c8a389 100644
> --- a/mm/compaction.c
> +++ b/mm/compaction.c
> @@ -1832,9 +1832,43 @@ static struct folio *compaction_alloc(struct folio *src, unsigned long data)
>  	struct compact_control *cc = (struct compact_control *)data;
>  	struct folio *dst;
>  	int order = folio_order(src);
> +	bool has_isolated_pages = false;
>  
> +again:
>  	if (!cc->freepages[order].nr_pages) {
> -		isolate_freepages(cc);
> +		int i;
> +
> +		for (i = order + 1; i < NR_PAGE_ORDERS; i++) {

You could probably just start with a loop that finds the start_order (and do
the isolate_freepages() attempt if there's none) and then handle the rest
outside of the loop. No need to separately handle the case where you have
the exact order available?
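Roughly something like the following, replacing both the whole
if (!cc->freepages[order].nr_pages) { } block and the exact-order fast path
below it (just an untested sketch to illustrate the idea, using the
freepages[] fields introduced earlier in this series; declarations omitted):

again:
	/*
	 * Find the smallest order with pages available, starting at the
	 * requested one, so an exact match needs no special casing.
	 */
	for (start_order = order; start_order < NR_PAGE_ORDERS; start_order++)
		if (cc->freepages[start_order].nr_pages)
			break;

	/* No free pages at all, try to isolate some (only once). */
	if (start_order == NR_PAGE_ORDERS) {
		if (has_isolated_pages)
			return NULL;
		isolate_freepages(cc);
		has_isolated_pages = true;
		goto again;
	}

	freepage = list_first_entry(&cc->freepages[start_order].pages,
				    struct page, lru);
	size = 1 << start_order;
	list_del(&freepage->lru);
	cc->freepages[start_order].nr_pages--;

	/*
	 * Split down to the requested order; with an exact match the loop
	 * body never runs.
	 */
	while (start_order > order) {
		start_order--;
		size >>= 1;
		list_add(&freepage[size].lru,
			 &cc->freepages[start_order].pages);
		cc->freepages[start_order].nr_pages++;
		set_page_private(&freepage[size], start_order);
	}
	dst = (struct folio *)freepage;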


> +			if (cc->freepages[i].nr_pages) {
> +				struct page *freepage =
> +					list_first_entry(&cc->freepages[i].pages,
> +							 struct page, lru);
> +
> +				int start_order = i;
> +				unsigned long size = 1 << start_order;
> +
> +				list_del(&freepage->lru);
> +				cc->freepages[i].nr_pages--;
> +
> +				while (start_order > order) {

With the exact order available, this while loop will just be skipped, and
that's the only difference?

> +					start_order--;
> +					size >>= 1;
> +
> +					list_add(&freepage[size].lru,
> +						&cc->freepages[start_order].pages);
> +					cc->freepages[start_order].nr_pages++;
> +					set_page_private(&freepage[size], start_order);
> +				}
> +				dst = (struct folio *)freepage;
> +				goto done;
> +			}
> +		}
> +		if (!has_isolated_pages) {
> +			isolate_freepages(cc);
> +			has_isolated_pages = true;
> +			goto again;
> +		}
> +
>  		if (!cc->freepages[order].nr_pages)
>  			return NULL;
>  	}
> @@ -1842,6 +1876,7 @@ static struct folio *compaction_alloc(struct folio *src, unsigned long data)
>  	dst = list_first_entry(&cc->freepages[order].pages, struct folio, lru);
>  	cc->freepages[order].nr_pages--;
>  	list_del(&dst->lru);
> +done:
>  	post_alloc_hook(&dst->page, order, __GFP_MOVABLE);
>  	if (order)
>  		prep_compound_page(&dst->page, order);

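FWIW, to make the split bookkeeping concrete, an order-0 request served from
an order-2 free page at pfn P would go through the while loop above like this
(illustration only, not from the patch):

	start:  start_order = 2, size = 4, freepage = page P
	iter 1: start_order = 1, size = 2 -> &freepage[2] (pages P+2..P+3) put on the order-1 list
	iter 2: start_order = 0, size = 1 -> &freepage[1] (page P+1) put on the order-0 list
	done:   freepage (page P) is returned as the order-0 destination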



