+ cma-enforce-non-zero-pageblock_order-during-cma_init_reserved_mem.patch added to mm-unstable branch

The patch titled
     Subject: cma: Enforce non-zero pageblock_order during cma_init_reserved_mem()
has been added to the -mm mm-unstable branch.  Its filename is
     cma-enforce-non-zero-pageblock_order-during-cma_init_reserved_mem.patch

This patch will shortly appear at
     https://git.kernel.org/pub/scm/linux/kernel/git/akpm/25-new.git/tree/patches/cma-enforce-non-zero-pageblock_order-during-cma_init_reserved_mem.patch

This patch will later appear in the mm-unstable branch at
    git://git.kernel.org/pub/scm/linux/kernel/git/akpm/mm

Before you just go and hit "reply", please:
   a) Consider who else should be cc'ed
   b) Prefer to cc a suitable mailing list as well
   c) Ideally: find the original patch on the mailing list and do a
      reply-to-all to that, adding suitable additional cc's

*** Remember to use Documentation/process/submit-checklist.rst when testing your code ***

The -mm tree is included into linux-next via the mm-everything
branch at git://git.kernel.org/pub/scm/linux/kernel/git/akpm/mm
and is updated there every 2-3 working days

------------------------------------------------------
From: "Ritesh Harjani (IBM)" <ritesh.list@xxxxxxxxx>
Subject: cma: Enforce non-zero pageblock_order during cma_init_reserved_mem()
Date: Fri, 11 Oct 2024 20:26:09 +0530

cma_init_reserved_mem() checks base and size alignment against
CMA_MIN_ALIGNMENT_BYTES.  However, some users may call it during early
boot, while pageblock_order is still 0.  A base and size that lack
pageblock_order alignment can then pass the check and cause functional
failures later, when the CMA area is activated.

So let's enforce pageblock_order to be non-zero during
cma_init_reserved_mem().
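
To make the failure mode concrete: CMA_MIN_ALIGNMENT_BYTES expands to
PAGE_SIZE * pageblock_nr_pages (include/linux/cma.h), so with
pageblock_order == 0 the alignment check degenerates to plain page
alignment.  A minimal userspace sketch of the arithmetic (not kernel
code; the 4K PAGE_SIZE and the later pageblock_order value of 9 are
illustrative assumptions):

	#include <stdio.h>

	#define PAGE_SIZE	4096UL		/* assumed 4K pages */
	#define IS_ALIGNED(x, a)	(((x) & ((a) - 1)) == 0)

	static unsigned long pageblock_order;	/* 0 until set during boot */

	/* mirrors CMA_MIN_ALIGNMENT_BYTES = PAGE_SIZE * pageblock_nr_pages */
	static unsigned long cma_min_alignment_bytes(void)
	{
		return PAGE_SIZE << pageblock_order;
	}

	int main(void)
	{
		/* page-aligned, but not pageblock-aligned */
		unsigned long base = 5 * PAGE_SIZE;
		unsigned long size = 3 * PAGE_SIZE;

		/* early boot: the check degenerates to PAGE_SIZE and passes */
		printf("order=0: aligned=%d\n",
		       IS_ALIGNED(base | size, cma_min_alignment_bytes()));

		/* after init, e.g. 2MB pageblocks: the same region fails */
		pageblock_order = 9;
		printf("order=9: aligned=%d\n",
		       IS_ALIGNED(base | size, cma_min_alignment_bytes()));

		return 0;
	}

The region passes the degenerate early-boot check but violates the
real alignment requirement once pageblock_order is set, which is
exactly the misuse the new check rejects up front.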

Link: https://lkml.kernel.org/r/054b416302486c2d3fdd5924b624477929100bf6.1728656994.git.ritesh.list@xxxxxxxxx
Signed-off-by: Ritesh Harjani (IBM) <ritesh.list@xxxxxxxxx>
Acked-by: David Hildenbrand <david@xxxxxxxxxx>
Acked-by: Zi Yan <ziy@xxxxxxxxxx>
Reviewed-by: Anshuman Khandual <anshuman.khandual@xxxxxxx>
Cc: Aneesh Kumar K.V <aneesh.kumar@xxxxxxxxxx>
Cc: Donet Tom <donettom@xxxxxxxxxxxxxxxxxx>
Cc: Hari Bathini <hbathini@xxxxxxxxxxxxx>
Cc: Kirill A. Shutemov <kirill.shutemov@xxxxxxxxxxxxxxx>
Cc: Madhavan Srinivasan <maddy@xxxxxxxxxxxxx>
Cc: Mahesh Salgaonkar <mahesh@xxxxxxxxxxxxx>
Cc: Michael Ellerman <mpe@xxxxxxxxxxxxxx>
Cc: Sachin P Bappalige <sachinpb@xxxxxxxxxxxxx>
Cc: Sourabh Jain <sourabhjain@xxxxxxxxxxxxx>
Cc: Zi Yan <ziy@xxxxxxxxxx>
Signed-off-by: Andrew Morton <akpm@xxxxxxxxxxxxxxxxxxxx>
---

 mm/cma.c |    9 +++++++++
 1 file changed, 9 insertions(+)

--- a/mm/cma.c~cma-enforce-non-zero-pageblock_order-during-cma_init_reserved_mem
+++ a/mm/cma.c
@@ -181,6 +181,15 @@ int __init cma_init_reserved_mem(phys_ad
 	if (!size || !memblock_is_region_reserved(base, size))
 		return -EINVAL;
 
+	/*
+	 * CMA uses CMA_MIN_ALIGNMENT_BYTES as alignment requirement which
+	 * needs pageblock_order to be initialized. Let's enforce it.
+	 */
+	if (!pageblock_order) {
+		pr_err("pageblock_order not yet initialized. Called during early boot?\n");
+		return -EINVAL;
+	}
+
 	/* ensure minimal alignment required by mm core */
 	if (!IS_ALIGNED(base | size, CMA_MIN_ALIGNMENT_BYTES))
 		return -EINVAL;
_
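
With the check in place, a too-early caller now fails fast with
-EINVAL instead of tripping over a misaligned range later in
cma_activate_area().  A hypothetical early-boot caller (the "demo"
name and parameters are illustrative, not from this patch) would see:

	struct cma *cma;
	int ret;

	/* illustrative reservation; base/size come from the caller */
	ret = cma_init_reserved_mem(base, size, 0, "demo", &cma);
	if (ret) {
		/* -EINVAL now also means pageblock_order was still 0 */
		pr_err("demo: cma_init_reserved_mem() failed: %d\n", ret);
		return ret;
	}

Such a caller would need to defer the reservation until after
pageblock_order has been initialized.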

Patches currently in -mm which might be from ritesh.list@xxxxxxxxx are

cma-enforce-non-zero-pageblock_order-during-cma_init_reserved_mem.patch




