Re: [PATCH] mm/page_alloc: avoid high-order page allocation warn with __GFP_NOFAIL

On Mon, Mar 06, 2023 at 08:51:40AM +0100, Michal Hocko wrote:
> [Cc'ing a couple more people recently involved with the vmalloc code]
> 
> On Sun 05-03-23 13:30:35, Gao Xiang wrote:
> > My knowledge of this is somewhat limited.  However, since vmalloc has
> > supported __GFP_NOFAIL since commit 9376130c390a ("mm/vmalloc: add
> > support for __GFP_NOFAIL"), __GFP_NOFAIL can trigger the following
> > stack and allocate high-order pages when CONFIG_HAVE_ARCH_HUGE_VMALLOC
> > is enabled:
> > 
> >  __alloc_pages+0x1cb/0x5b0 mm/page_alloc.c:5549
> >  alloc_pages+0x1aa/0x270 mm/mempolicy.c:2286
> >  vm_area_alloc_pages mm/vmalloc.c:2989 [inline]
> >  __vmalloc_area_node mm/vmalloc.c:3057 [inline]
> >  __vmalloc_node_range+0x978/0x13c0 mm/vmalloc.c:3227
> >  kvmalloc_node+0x156/0x1a0 mm/util.c:606
> >  kvmalloc include/linux/slab.h:737 [inline]
> >  kvmalloc_array include/linux/slab.h:755 [inline]
> >  kvcalloc include/linux/slab.h:760 [inline]
> >  (codebase: Linux 6.2-rc2)
> > 
> > Don't warn in such cases, since high-order allocations with
> > __GFP_NOFAIL are somewhat legal.
> 
> OK, this is definitely a bug, and it seems my 9376130c390a was
> incomplete because it didn't cover the high-order case. Not sure how
> that happened, but removing the warning is not the right thing to do
> here. The higher-order allocation is an optimization rather than a
> must, so it is perfectly fine to fail that allocation and retry with
> order-0 pages rather than go into a very expensive, and potentially
> impossible, higher-order allocation that must not fail.
> 
> The proper fix should look like this, unless I am missing something. I
> would appreciate another pair of eyes on it because I am not very
> familiar with the high-order optimization part.
> 
> Thanks!
> --- 
> diff --git a/mm/vmalloc.c b/mm/vmalloc.c
> index ef910bf349e1..a8aa2765618a 100644
> --- a/mm/vmalloc.c
> +++ b/mm/vmalloc.c
> @@ -2883,6 +2883,8 @@ vm_area_alloc_pages(gfp_t gfp, int nid,
>  		unsigned int order, unsigned int nr_pages, struct page **pages)
>  {
>  	unsigned int nr_allocated = 0;
> +	gfp_t alloc_gfp = gfp;
> +	bool nofail = false;
>  	struct page *page;
>  	int i;
>  
> @@ -2931,20 +2933,30 @@ vm_area_alloc_pages(gfp_t gfp, int nid,
>  			if (nr != nr_pages_request)
>  				break;
>  		}
> +	} else {
> +		alloc_gfp &= ~__GFP_NOFAIL;
> +		nofail = true;
>  	}
>  
>  	/* High-order pages or fallback path if "bulk" fails. */
> -
>  	while (nr_allocated < nr_pages) {
>  		if (fatal_signal_pending(current))
>  			break;
>  
>  		if (nid == NUMA_NO_NODE)
> -			page = alloc_pages(gfp, order);
> +			page = alloc_pages(alloc_gfp, order);
>  		else
> -			page = alloc_pages_node(nid, gfp, order);
> -		if (unlikely(!page))
> -			break;
> +			page = alloc_pages_node(nid, alloc_gfp, order);
> +		if (unlikely(!page)) {
> +			if (!nofail)
> +				break;
> +
> +			/* fall back to the zero order allocations */
> +			alloc_gfp |= __GFP_NOFAIL;
> +			order = 0;
> +			continue;
> +		}
> +
>  		/*
>  		 * Higher order allocations must be able to be treated as
>  		 * indepdenent small pages by callers (as they can with

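For context (going from memory of the 6.2 sources, so please double-check):
the warning Gao Xiang hit should be the order > 1 sanity check in
rmqueue(), which fires for any __GFP_NOFAIL allocation above order-1:

<snip>
	/*
	 * We most definitely don't want callers attempting to
	 * allocate greater than order-1 page units with __GFP_NOFAIL.
	 */
	WARN_ON_ONCE((gfp_flags & __GFP_NOFAIL) && (order > 1));
<snip>

so with your change the high-order attempts no longer carry __GFP_NOFAIL
and should never reach that check with the flag set.
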
Some questions:

1. Could you please add a comment explaining why you want the bulk_gfp
   without __GFP_NOFAIL (the bulk path)?
2. Could you please add a comment explaining why high-order pages do not
   want __GFP_NOFAIL? You have already explained it above.
3. Looking at the patch:

<snip>
+       } else {
+               alloc_gfp &= ~__GFP_NOFAIL;
+               nofail = true;
<snip>

If the user does not want to go with the __GFP_NOFAIL flag, why do you
force it when a high-order allocation fails and you switch to order-0
allocations? (In the high-order scenario you always use __GFP_NOFAIL in
the order-0 recovery path.)
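
Something like the below (an untested sketch against your diff, reusing
your variable names) would keep the order-0 __GFP_NOFAIL recovery only
for callers that actually passed the flag:

<snip>
	} else {
		/* Only honour NOFAIL in the fallback if the caller set it. */
		nofail = gfp & __GFP_NOFAIL;
		/* The high-order attempt is an optimization, let it fail. */
		alloc_gfp &= ~__GFP_NOFAIL;
	}
<snip>

i.e. derive nofail from the caller's gfp instead of setting it
unconditionally in the high-order branch.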

Thanks!

--
Uladzislau Rezki



