Re: [PATCH v2] mm: hugetlb: optionally allocate gigantic hugepages using cma

On Mon 09-03-20 17:25:24, Roman Gushchin wrote:
[...]
> 2) Run-time allocations of gigantic hugepages are performed using the
>    cma allocator and the dedicated cma area

[...]
> @@ -1237,6 +1246,23 @@ static struct page *alloc_gigantic_page(struct hstate *h, gfp_t gfp_mask,
>  {
>  	unsigned long nr_pages = 1UL << huge_page_order(h);
>  
> +	if (IS_ENABLED(CONFIG_CMA) && hugetlb_cma[0]) {
> +		struct page *page;
> +		int nid;
> +
> +		for_each_node_mask(nid, *nodemask) {
> +			if (!hugetlb_cma[nid])
> +				break;
> +
> +			page = cma_alloc(hugetlb_cma[nid], nr_pages,
> +					 huge_page_order(h), true);
> +			if (page)
> +				return page;
> +		}
> +
> +		return NULL;

Is there any strong reason why the allocation cannot fall back to the
non-CMA allocator when the CMA area is depleted?

> +	}
> +
>  	return alloc_contig_pages(nr_pages, gfp_mask, nid, nodemask);
>  }

-- 
Michal Hocko
SUSE Labs