Re: [PATCH] mm/hugetlb: convert dissolve_free_huge_pages() to folios

On 2024/4/12 0:47, Sidhartha Kumar wrote:
> Allows us to rename dissolve_free_huge_pages() to
> dissolve_free_hugetlb_folio(). Convert one caller to pass in a folio
> directly and use page_folio() to convert the caller in mm/memory-failure.
> 
> Signed-off-by: Sidhartha Kumar <sidhartha.kumar@xxxxxxxxxx>

Thanks for your patch. Some nits below.

> ---
>  include/linux/hugetlb.h |  4 ++--
>  mm/hugetlb.c            | 15 +++++++--------
>  mm/memory-failure.c     |  4 ++--
>  3 files changed, 11 insertions(+), 12 deletions(-)
> 
>  
>  /*
> - * Dissolve a given free hugepage into free buddy pages. This function does
> - * nothing for in-use hugepages and non-hugepages.
> + * Dissolve a given free hugetlb folio into free buddy pages. This function
> + * does nothing for in-use hugepages and non-hugepages.

in-use hugetlb folios and non-hugetlb folios?
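i.e. something like (just a suggestion):

 * Dissolve a given free hugetlb folio into free buddy pages. This function
 * does nothing for in-use hugetlb folios and non-hugetlb folios.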

>   * This function returns values like below:
>   *
>   *  -ENOMEM: failed to allocate vmemmap pages to free the freed hugepages
> @@ -2390,10 +2390,9 @@ static struct folio *remove_pool_hugetlb_folio(struct hstate *h,
>   *       0:  successfully dissolved free hugepages or the page is not a
>   *           hugepage (considered as already dissolved)
>   */
> -int dissolve_free_huge_page(struct page *page)
> +int dissolve_free_hugetlb_folio(struct folio *folio)
>  {
>  	int rc = -EBUSY;
> -	struct folio *folio = page_folio(page);
>  
>  retry:
>  	/* Not to disrupt normal path by vainly holding hugetlb_lock */
> @@ -2470,13 +2469,13 @@ int dissolve_free_huge_page(struct page *page)
>   * make specified memory blocks removable from the system.
>   * Note that this will dissolve a free gigantic hugepage completely, if any
>   * part of it lies within the given range.
> - * Also note that if dissolve_free_huge_page() returns with an error, all
> + * Also note that if dissolve_free_hugetlb_folio() returns with an error, all
>   * free hugepages that were dissolved before that error are lost.

free hugetlb folios?
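i.e.:

 * Also note that if dissolve_free_hugetlb_folio() returns with an error, all
 * free hugetlb folios that were dissolved before that error are lost.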

>   */
>  int dissolve_free_huge_pages(unsigned long start_pfn, unsigned long end_pfn)
>  {
>  	unsigned long pfn;
> -	struct page *page;
> +	struct folio *folio;
>  	int rc = 0;
>  	unsigned int order;
>  	struct hstate *h;
> @@ -2489,8 +2488,8 @@ int dissolve_free_huge_pages(unsigned long start_pfn, unsigned long end_pfn)
>  		order = min(order, huge_page_order(h));
>  
>  	for (pfn = start_pfn; pfn < end_pfn; pfn += 1 << order) {
> -		page = pfn_to_page(pfn);
> -		rc = dissolve_free_huge_page(page);
> +		folio = pfn_folio(pfn);
> +		rc = dissolve_free_hugetlb_folio(folio);
>  		if (rc)
>  			break;
>  	}
> diff --git a/mm/memory-failure.c b/mm/memory-failure.c
> index 88359a185c5f9..5a6062b61c44d 100644
> --- a/mm/memory-failure.c
> +++ b/mm/memory-failure.c
> @@ -155,11 +155,11 @@ static int __page_handle_poison(struct page *page)
>  
>  	/*
>  	 * zone_pcp_disable() can't be used here. It will hold pcp_batch_high_lock and
> -	 * dissolve_free_huge_page() might hold cpu_hotplug_lock via static_key_slow_dec()
> +	 * dissolve_free_hugetlb_folio() might hold cpu_hotplug_lock via static_key_slow_dec()
>  	 * when hugetlb vmemmap optimization is enabled. This will break current lock
>  	 * dependency chain and leads to deadlock.
>  	 */
> -	ret = dissolve_free_huge_page(page);
> +	ret = dissolve_free_hugetlb_folio(page_folio(page));
>  	if (!ret) {
>  		drain_all_pages(page_zone(page));
>  		ret = take_page_off_buddy(page);

There is a comment in page_handle_poison() that still refers to dissolve_free_huge_page(). It might be better to update it too?

static bool page_handle_poison(struct page *page, bool hugepage_or_freepage, bool release)
{
	if (hugepage_or_freepage) {
		/*
		 * Doing this check for free pages is also fine since *dissolve_free_huge_page*
		 * returns 0 for non-hugetlb pages as well.
		 */
		if (__page_handle_poison(page) <= 0)
			/*
			 * We could fail to take off the target page from buddy
			 * for example due to racy page allocation, but that's
			 * acceptable because soft-offlined page is not broken
			 * and if someone really want to use it, they should
			 * take it.
			 */
			return false;
	}
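
Maybe something like (untested, just to illustrate the wording):

		/*
		 * Doing this check for free pages is also fine since
		 * dissolve_free_hugetlb_folio() returns 0 for non-hugetlb
		 * pages as well.
		 */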

Thanks.
