Re: [PATCH v2] Supports to use the default CMA when the device-specified CMA memory is not enough.

On Wed, 12 Jun 2024 16:12:16 +0800 "zhai.he" <zhai.he@xxxxxxx> wrote:

> From: He Zhai <zhai.he@xxxxxxx>

(cc Barry & Christoph)

What was your reason for adding cc:stable to the email headers?  Does
this address some serious problem?  If so, please fully describe that
problem.

> In the current code, if allocation from the device-specified CMA area
> fails, memory is not allocated from the default CMA area.  This patch
> falls back to the default CMA area when the device-specified area is
> not large enough.
>
> In addition, the log level of the allocation-failure message is lowered
> to debug.  These messages are printed when allocation from the
> device-specified CMA area fails, but since the allocation is then
> retried from the default CMA area, an error-level message can easily
> mislead developers.
>
> ...
>
> --- a/kernel/dma/contiguous.c
> +++ b/kernel/dma/contiguous.c
> @@ -357,8 +357,13 @@ struct page *dma_alloc_contiguous(struct device *dev, size_t size, gfp_t gfp)
>  	/* CMA can be used only in the context which permits sleeping */
>  	if (!gfpflags_allow_blocking(gfp))
>  		return NULL;
> -	if (dev->cma_area)
> -		return cma_alloc_aligned(dev->cma_area, size, gfp);
> +	if (dev->cma_area) {
> +		struct page *page = NULL;
> +
> +		page = cma_alloc_aligned(dev->cma_area, size, gfp);
> +		if (page)
> +			return page;
> +	}
>  	if (size <= PAGE_SIZE)
>  		return NULL;

The dma_alloc_contiguous() kerneldoc should be updated for this.
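
Something along these lines, perhaps (just an illustrative sketch, not the
current comment's exact wording; please merge it with what is already
there):

	/**
	 * dma_alloc_contiguous() - allocate contiguous pages
	 * @dev:   Pointer to device for which the allocation is performed.
	 * @size:  Requested allocation size.
	 * @gfp:   Allocation flags.
	 *
	 * Tries the device-specific contiguous area first.  If that
	 * allocation fails (or no such area exists), the allocation now
	 * falls back to the per-numa or default global CMA area rather
	 * than failing outright.
	 */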

The patch prompts the question "why does the device-specified CMA area
exist?".  Why not always allocate from the global pool?  If the
device-specified area exists to prevent one device from going crazy and
consuming too much contiguous memory, doesn't this patch violate that
intent?
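
If some devices really do want this behaviour, perhaps the fallback
should be opt-in rather than unconditional.  A rough sketch of what I
mean (the cma_fallback_allowed flag is made up purely for illustration,
it doesn't exist today):

	if (dev->cma_area) {
		struct page *page;

		page = cma_alloc_aligned(dev->cma_area, size, gfp);
		/* Succeeded, or fallback not permitted for this device */
		if (page || !dev->cma_fallback_allowed)
			return page;
		/* Otherwise fall through to the per-numa/default path */
	}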

> @@ -406,6 +411,8 @@ void dma_free_contiguous(struct device *dev, struct page *page, size_t size)
>  	if (dev->cma_area) {
>  		if (cma_release(dev->cma_area, page, count))
>  			return;
> +		if (cma_release(dma_contiguous_default_area, page, count))
> +			return;
>  	} else {
>  		/*
>  		 * otherwise, page is from either per-numa cma or default cma
> diff --git a/mm/cma.c b/mm/cma.c
> index 3e9724716bad..6e12faf1bea7 100644
> --- a/mm/cma.c
> +++ b/mm/cma.c
> @@ -495,8 +495,8 @@ struct page *cma_alloc(struct cma *cma, unsigned long count,
>  	}
>  
>  	if (ret && !no_warn) {
> -		pr_err_ratelimited("%s: %s: alloc failed, req-size: %lu pages, ret: %d\n",
> -				   __func__, cma->name, count, ret);
> +		pr_debug("%s: alloc failed, req-size: %lu pages, ret: %d, try to use default cma\n",
> +			    cma->name, count, ret);
>  		cma_debug_show_areas(cma);
>  	}




