> -----Original Message-----
> From: Barry Song <baohua@xxxxxxxxxx>
> Sent: Thursday, June 13, 2024 5:37 AM
> To: Andrew Morton <akpm@xxxxxxxxxxxxxxxxxxxx>
> Cc: Zhai He <zhai.he@xxxxxxx>; sboyd@xxxxxxxxxx; linux-mm@xxxxxxxxx;
> linux-kernel@xxxxxxxxxxxxxxx; stable@xxxxxxxxxxxxxxx; Zhipeng Wang
> <zhipeng.wang_1@xxxxxxx>; Jindong Yue <jindong.yue@xxxxxxx>; Christoph
> Hellwig <hch@xxxxxx>
> Subject: [EXT] Re: [PATCH v2] Supports to use the default CMA when the
> device-specified CMA memory is not enough.
>
> On Thu, Jun 13, 2024 at 6:47 AM Andrew Morton <akpm@xxxxxxxxxxxxxxxxxxxx> wrote:
> >
> > On Wed, 12 Jun 2024 16:12:16 +0800 "zhai.he" <zhai.he@xxxxxxx> wrote:
> >
> > > From: He Zhai <zhai.he@xxxxxxx>
> >
> > (cc Barry & Christoph)
> >
> > What was your reason for adding cc:stable to the email headers? Does
> > this address some serious problem? If so, please fully describe that
> > problem.
> >
> > > In the current code logic, if the device-specified CMA memory
> > > allocation fails, memory will not be allocated from the default CMA
> > > area. This patch falls back to the default CMA region when the
> > > device-specified CMA is not large enough.
> > >
> > > In addition, the log level of the allocation-failure message is
> > > lowered to debug: the message is printed whenever allocation from
> > > the device-specified CMA fails, but since the allocation then falls
> > > back to the default CMA area, an error-level message can easily
> > > mislead developers.
>
> I am not convinced that this patch is correct. If the device-specific CMA
> area is too small, why not increase it in the device tree? Conversely, if
> the default CMA size is too large, why not reduce it via the cmdline? CMA
> offers all kinds of flexible configuration options based on users' needs.
>
> One significant benefit of device-specific CMA is that it helps decrease
> fragmentation in the common CMA pool. Many devices allocate memory from
> the same pool, but with different size and alignment requirements, so
> repeated cycles of allocation and release can leave the CMA pool with
> enough free space overall while a request for contiguous memory from it
> still fails.
>
> This patch entirely negates the advantage we gain from device-specific CMA.
> My point is that instead of modifying the core code, please consider
> correcting your device tree or cmdline configurations.

We enabled the secure heap to support Widevine DRM. The secure heap requires
a security configuration, so its starting address cannot be chosen
arbitrarily, and the default CMA had to be shrunk to make room for it. To
offset that reduction we used a multi-segment CMA and dedicated one segment
to the VPU. Under our memory configuration, however, the device-specific CMA
is not large enough for the VPU to decode high-resolution streams, so this
patch was added to let the VPU work properly. Thanks.

> >
> > > ...
> > >
> > > --- a/kernel/dma/contiguous.c
> > > +++ b/kernel/dma/contiguous.c
> > > @@ -357,8 +357,13 @@ struct page *dma_alloc_contiguous(struct device *dev, size_t size, gfp_t gfp)
> > >  	/* CMA can be used only in the context which permits sleeping */
> > >  	if (!gfpflags_allow_blocking(gfp))
> > >  		return NULL;
> > > -	if (dev->cma_area)
> > > -		return cma_alloc_aligned(dev->cma_area, size, gfp);
> > > +	if (dev->cma_area) {
> > > +		struct page *page = NULL;
> > > +
> > > +		page = cma_alloc_aligned(dev->cma_area, size, gfp);
> > > +		if (page)
> > > +			return page;
> > > +	}
> > >  	if (size <= PAGE_SIZE)
> > >  		return NULL;
> >
> > The dma_alloc_contiguous() kerneldoc should be updated for this.
> >
> > The patch prompts the question "why does the device-specified CMA area
> > exist?".
> > Why not always allocate from the global pool? If the
> > device-specified area exists to prevent one device from going crazy
> > and consuming too much contiguous memory, this patch violates that intent?
> >
> > > @@ -406,6 +411,8 @@ void dma_free_contiguous(struct device *dev, struct page *page, size_t size)
> > >  	if (dev->cma_area) {
> > >  		if (cma_release(dev->cma_area, page, count))
> > >  			return;
> > > +		if (cma_release(dma_contiguous_default_area, page, count))
> > > +			return;
> > >  	} else {
> > >  		/*
> > >  		 * otherwise, page is from either per-numa cma or default cma
> > > diff --git a/mm/cma.c b/mm/cma.c
> > > index 3e9724716bad..6e12faf1bea7 100644
> > > --- a/mm/cma.c
> > > +++ b/mm/cma.c
> > > @@ -495,8 +495,8 @@ struct page *cma_alloc(struct cma *cma, unsigned long count,
> > >  	}
> > >
> > >  	if (ret && !no_warn) {
> > > -		pr_err_ratelimited("%s: %s: alloc failed, req-size: %lu pages, ret: %d\n",
> > > -			__func__, cma->name, count, ret);
> > > +		pr_debug("%s: alloc failed, req-size: %lu pages, ret: %d, try to use default cma\n",
> > > +			cma->name, count, ret);
> > >  		cma_debug_show_areas(cma);
> > >  	}
>
> Thanks
> Barry