Re: [PATCH] mm, hwpoison, hugetlb: Free hwpoison huge page to list tail and dissolve hwpoison huge page first

On 08/02/22 06:07, luofei wrote:
> If hwpoison huge pages are freed to the tail of hugepage_freelists,
> the loop can exit as soon as a hwpoison huge page is encountered,
> which effectively avoids repeatedly traversing hwpoison huge pages.
> Meanwhile, when free huge pages are released to the lower level
> allocators, releasing the hwpoison ones first improves the effective
> utilization rate of huge pages.

In general, I think this is a good idea.  Although, it seems that with
recent changes to the hugetlb poisoning code we are even less likely to
have a poisoned page on the hugetlb free lists.
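
The ordering the patch relies on is roughly the following (untested
userspace toy with made-up names, not the kernel list API;
enqueue_head()/enqueue_tail() stand in for list_move() and
list_move_tail() on hugepage_freelists[nid]):

#include <stdbool.h>
#include <stdio.h>

#define MAX_PAGES 8

struct toy_page {
	int id;
	bool hwpoison;
};

static struct toy_page freelist[MAX_PAGES];
static int nr_free;

/* clean pages go to the front of the free list */
static void enqueue_head(struct toy_page p)
{
	for (int i = nr_free; i > 0; i--)
		freelist[i] = freelist[i - 1];
	freelist[0] = p;
	nr_free++;
}

/* hwpoison pages go to the back, behind every clean page */
static void enqueue_tail(struct toy_page p)
{
	freelist[nr_free++] = p;
}

static void enqueue(struct toy_page p)
{
	if (p.hwpoison)
		enqueue_tail(p);
	else
		enqueue_head(p);
}

/*
 * Dequeue scan with the proposed 'break': everything past the first
 * poisoned page is also poisoned, so there is no point looking further.
 * (Toy only; it does not actually unlink the page.)
 */
static struct toy_page *dequeue(void)
{
	for (int i = 0; i < nr_free; i++) {
		if (freelist[i].hwpoison)
			break;
		return &freelist[i];
	}
	return NULL;
}

int main(void)
{
	enqueue((struct toy_page){ .id = 1, .hwpoison = false });
	enqueue((struct toy_page){ .id = 2, .hwpoison = true  });
	enqueue((struct toy_page){ .id = 3, .hwpoison = false });

	/* free list is now: 3 (clean), 1 (clean), 2 (poisoned) */
	struct toy_page *p = dequeue();
	printf("dequeued page %d\n", p ? p->id : -1);	/* page 3 */
	return 0;
}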

Adding Naoya and Miaohe as they have been looking at page poisoning of
hugetlb pages recently.

> Signed-off-by: luofei <luofei@xxxxxxxxxxxx>
> ---
>  mm/hugetlb.c | 13 ++++++++-----
>  1 file changed, 8 insertions(+), 5 deletions(-)
> 
> diff --git a/mm/hugetlb.c b/mm/hugetlb.c
> index 28516881a1b2..ca72220eedd9 100644
> --- a/mm/hugetlb.c
> +++ b/mm/hugetlb.c
> @@ -1116,7 +1116,10 @@ static void enqueue_huge_page(struct hstate *h, struct page *page)
>  	lockdep_assert_held(&hugetlb_lock);
>  	VM_BUG_ON_PAGE(page_count(page), page);
>  
> -	list_move(&page->lru, &h->hugepage_freelists[nid]);
> +	if (unlikely(PageHWPoison(page)))
> +		list_move_tail(&page->lru, &h->hugepage_freelists[nid]);
> +	else
> +		list_move(&page->lru, &h->hugepage_freelists[nid]);
>  	h->free_huge_pages++;
>  	h->free_huge_pages_node[nid]++;
>  	SetHPageFreed(page);
> @@ -1133,7 +1136,7 @@ static struct page *dequeue_huge_page_node_exact(struct hstate *h, int nid)
>  			continue;
>  
>  		if (PageHWPoison(page))
> -			continue;
> +			break;

IIRC, it is 'possible' to unpoison a page via the debug/testing interfaces.
If so, then we could end up with free unpoisoned page(s) at the end of
the list that would never be used, because we quit when encountering a
poisoned page.

Naoya and Miaohe would know for sure.
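
Untested userspace sketch of that concern (made-up names, not kernel
code); unpoison clears the flag in place, it does not re-link the page:

#include <stdbool.h>
#include <stdio.h>

struct toy_page {
	int id;
	bool hwpoison;
};

int main(void)
{
	/* tail-ordered free list after the patch: poisoned pages last */
	struct toy_page freelist[] = {
		{ .id = 1, .hwpoison = false },
		{ .id = 2, .hwpoison = true  },
		{ .id = 3, .hwpoison = true  },
	};

	/* hypothetical unpoison of page 3: flag cleared, position unchanged */
	freelist[2].hwpoison = false;

	/* dequeue scan with the proposed 'break' */
	for (int i = 0; i < 3; i++) {
		if (freelist[i].hwpoison)
			break;	/* stops at page 2, never reaches page 3 */
		printf("page %d can be dequeued\n", freelist[i].id);
	}
	/* only page 1 is ever considered; the now clean page 3 is stranded */
	return 0;
}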

Same possible issue in demote_pool_huge_page().
-- 
Mike Kravetz

>  
>  		list_move(&page->lru, &h->hugepage_activelist);
>  		set_page_refcounted(page);
> @@ -2045,7 +2048,7 @@ static struct page *remove_pool_huge_page(struct hstate *h,
>  		 */
>  		if ((!acct_surplus || h->surplus_huge_pages_node[node]) &&
>  		    !list_empty(&h->hugepage_freelists[node])) {
> -			page = list_entry(h->hugepage_freelists[node].next,
> +			page = list_entry(h->hugepage_freelists[node].prev,
>  					  struct page, lru);
>  			remove_hugetlb_page(h, page, acct_surplus);
>  			break;
> @@ -3210,7 +3213,7 @@ static void try_to_free_low(struct hstate *h, unsigned long count,
>  	for_each_node_mask(i, *nodes_allowed) {
>  		struct page *page, *next;
>  		struct list_head *freel = &h->hugepage_freelists[i];
> -		list_for_each_entry_safe(page, next, freel, lru) {
> +		list_for_each_entry_safe_reverse(page, next, freel, lru) {
>  			if (count >= h->nr_huge_pages)
>  				goto out;
>  			if (PageHighMem(page))
> @@ -3494,7 +3497,7 @@ static int demote_pool_huge_page(struct hstate *h, nodemask_t *nodes_allowed)
>  	for_each_node_mask_to_free(h, nr_nodes, node, nodes_allowed) {
>  		list_for_each_entry(page, &h->hugepage_freelists[node], lru) {
>  			if (PageHWPoison(page))
> -				continue;
> +				break;
>  
>  			return demote_free_huge_page(h, page);
>  		}
> -- 
> 2.27.0
> 



