The patch titled
     Subject: mm: cma: improve pr_debug log in cma_release()
has been removed from the -mm tree.  Its filename was
     mm-cma-improve-pr_debug-log-in-cma_release.patch

This patch was dropped because it was merged into mainline or a subsystem tree

------------------------------------------------------
From: Charan Teja Reddy <charante@xxxxxxxxxxxxxx>
Subject: mm: cma: improve pr_debug log in cma_release()

Print the 'count' of pages, along with the pages themselves, passed to
cma_release(), so that a mismatch between the count values passed to
cma_alloc() and cma_release() from a code path can be debugged.

As an example, consider the below scenario:

1) The CMA pool size is 4MB, and

2) the user erroneously allocates 2 pages but frees only 1 page in a loop
   from this CMA pool.  Step 2 eventually causes cma_alloc() to return
   NULL because of the -ENOMEM condition.

The current pr_debug logs give no information about this type of
allocation pattern, because the count value is not printed in
cma_release().  (A minimal sketch of this pattern appears after the patch
below.)

The count value is already printed in the trace logs; extend the same to
the pr_debug logs too.

[akpm@xxxxxxxxxxxxxxxxxxxx: fix printk warning]
Link: https://lkml.kernel.org/r/1606318341-29521-1-git-send-email-charante@xxxxxxxxxxxxxx
Signed-off-by: Charan Teja Reddy <charante@xxxxxxxxxxxxxx>
Reviewed-by: Souptick Joarder <jrdr.linux@xxxxxxxxx>
Reviewed-by: David Hildenbrand <david@xxxxxxxxxx>
Acked-by: Vlastimil Babka <vbabka@xxxxxxx>
Cc: Vinayak Menon <vinmenon@xxxxxxxxxxxxxx>
Cc: Joonsoo Kim <iamjoonsoo.kim@xxxxxxx>
Signed-off-by: Andrew Morton <akpm@xxxxxxxxxxxxxxxxxxxx>
---

 mm/cma.c |    2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

--- a/mm/cma.c~mm-cma-improve-pr_debug-log-in-cma_release
+++ a/mm/cma.c
@@ -510,7 +510,7 @@ bool cma_release(struct cma *cma, const
 	if (!cma || !pages)
 		return false;
 
-	pr_debug("%s(page %p)\n", __func__, (void *)pages);
+	pr_debug("%s(page %p, count %u)\n", __func__, (void *)pages, count);
 
 	pfn = page_to_pfn(pages);
 
_

Patches currently in -mm which might be from charante@xxxxxxxxxxxxxx are
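
For reference, below is a minimal sketch (not part of the patch) of the
erroneous allocation pattern described in the changelog, assuming the
in-kernel cma_alloc()/cma_release() API; the helper name is hypothetical.

#include <linux/cma.h>
#include <linux/mm.h>

/*
 * Hypothetical helper, for illustration only: each iteration allocates
 * 2 pages from the given CMA area but releases only 1, so a small pool
 * (e.g. 4MB) is exhausted after enough iterations and cma_alloc()
 * starts returning NULL.
 */
static void cma_mismatched_release_demo(struct cma *cma)
{
	struct page *pages;

	while (true) {
		pages = cma_alloc(cma, 2, 0, false);
		if (!pages)
			break;		/* pool exhausted: the -ENOMEM case */
		/* Erroneous: count does not match the count passed to cma_alloc() */
		cma_release(cma, pages, 1);
	}
}

With the patch applied, enabling pr_debug for mm/cma.c prints the count in
each cma_release() line ("cma_release(page ..., count 1)"), so the 2-vs-1
mismatch against the cma_alloc() log becomes visible.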