[merged mm-stable] cma-enforce-non-zero-pageblock_order-during-cma_init_reserved_mem.patch removed from -mm tree

The quilt patch titled
     Subject: cma: enforce non-zero pageblock_order during cma_init_reserved_mem()
has been removed from the -mm tree.  Its filename was
     cma-enforce-non-zero-pageblock_order-during-cma_init_reserved_mem.patch

This patch was dropped because it was merged into the mm-stable branch
of git://git.kernel.org/pub/scm/linux/kernel/git/akpm/mm

------------------------------------------------------
From: "Ritesh Harjani (IBM)" <ritesh.list@xxxxxxxxx>
Subject: cma: enforce non-zero pageblock_order during cma_init_reserved_mem()
Date: Wed, 13 Nov 2024 19:49:54 +0530

cma_init_reserved_mem() checks base and size for alignment with
CMA_MIN_ALIGNMENT_BYTES.  However, some users might call it during early
boot, when pageblock_order is still 0.  In that case, if base and size do
not have pageblock_order alignment, it can cause functional failures later
during CMA area activation (cma_activate_area()).
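
To make the failure mode concrete: CMA_MIN_ALIGNMENT_BYTES is derived from
pageblock_order, so running the alignment check before pageblock_order is
set silently degrades it to plain page alignment.  A minimal sketch of the
dependency (paraphrased from include/linux/cma.h; exact definitions may
vary by kernel version):

	/*
	 * Paraphrased, not verbatim: CMA's minimum alignment is one
	 * pageblock, and pageblock_nr_pages == 1UL << pageblock_order.
	 */
	#define CMA_MIN_ALIGNMENT_PAGES pageblock_nr_pages
	#define CMA_MIN_ALIGNMENT_BYTES (PAGE_SIZE * CMA_MIN_ALIGNMENT_PAGES)

	/*
	 * With pageblock_order == 0 (early boot), the check below only
	 * enforces PAGE_SIZE alignment, so a misaligned region can slip
	 * through and fail later in cma_activate_area().
	 */
	if (!IS_ALIGNED(base | size, CMA_MIN_ALIGNMENT_BYTES))
		return -EINVAL;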

So let's enforce that pageblock_order is non-zero during
cma_init_reserved_mem() to catch such wrong usages.

1. This was seen with fadump on PowerPC, which was calling
   cma_init_reserved_mem() before pageblock_order was initialized.  That
   has since been fixed in fadump on PowerPC itself.  The details,
   including the userspace-visible effects of the issue, can be found in
   that patch [1].

2. However, it was also decided that we should add a stronger enforcement
   check within cma_init_reserved_mem() to catch such wrong usages [2].
   Hence this patch.  This is fine to be in -next, and no "Fixes:" tag is
   required for it.

[1]: https://lore.kernel.org/all/3ae208e48c0d9cefe53d2dc4f593388067405b7d.1729146153.git.ritesh.list@xxxxxxxxx/
[2]: https://lore.kernel.org/all/83eb128e-4f06-4725-a843-a4563f246a44@xxxxxxxxxx/

Link: https://lkml.kernel.org/r/e274344b44d5f80fa54c52f530387257fe99ec65.1731505681.git.ritesh.list@xxxxxxxxx
Signed-off-by: Ritesh Harjani (IBM) <ritesh.list@xxxxxxxxx>
Acked-by: David Hildenbrand <david@xxxxxxxxxx>
Acked-by: Zi Yan <ziy@xxxxxxxxxx>
Reviewed-by: Anshuman Khandual <anshuman.khandual@xxxxxxx>
Signed-off-by: Andrew Morton <akpm@xxxxxxxxxxxxxxxxxxxx>
---

 mm/cma.c |    9 +++++++++
 1 file changed, 9 insertions(+)

--- a/mm/cma.c~cma-enforce-non-zero-pageblock_order-during-cma_init_reserved_mem
+++ a/mm/cma.c
@@ -181,6 +181,15 @@ int __init cma_init_reserved_mem(phys_ad
 	if (!size || !memblock_is_region_reserved(base, size))
 		return -EINVAL;
 
+	/*
+	 * CMA uses CMA_MIN_ALIGNMENT_BYTES as alignment requirement which
+	 * needs pageblock_order to be initialized. Let's enforce it.
+	 */
+	if (!pageblock_order) {
+		pr_err("pageblock_order not yet initialized. Called during early boot?\n");
+		return -EINVAL;
+	}
+
 	/* ensure minimal alignment required by mm core */
 	if (!IS_ALIGNED(base | size, CMA_MIN_ALIGNMENT_BYTES))
 		return -EINVAL;
_
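
For context, a hedged sketch of what a caller hitting the new check would
observe; the reservation site below is hypothetical and only illustrates
the cma_init_reserved_mem() calling convention:

	/*
	 * Hypothetical early-boot caller (illustrative only).  With the
	 * patch above, calling before pageblock_order is initialized now
	 * fails cleanly with -EINVAL instead of breaking CMA activation
	 * much later.
	 */
	struct cma *cma;
	int ret;

	ret = cma_init_reserved_mem(base, size, 0, "example-region", &cma);
	if (ret) {
		pr_warn("cma: reservation failed: %d (pageblock_order=%u)\n",
			ret, pageblock_order);
		/* defer the reservation or fall back to non-CMA memory */
	}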

Patches currently in -mm which might be from ritesh.list@xxxxxxxxx are