On 9/23/2021 3:54 PM, Chris Goldsworthy wrote:
From: Sudarshan Rajagopalan <quic_sudaraja@xxxxxxxxxxx>
After new memory blocks have been hotplugged, max_pfn and max_low_pfn
need to be updated to reflect the new PFNs that have been hot-added to
the system.
Signed-off-by: Sudarshan Rajagopalan <quic_sudaraja@xxxxxxxxxxx>
Signed-off-by: Chris Goldsworthy <quic_cgoldswo@xxxxxxxxxxx>
---
arch/arm64/mm/mmu.c | 5 +++++
1 file changed, 5 insertions(+)
diff --git a/arch/arm64/mm/mmu.c b/arch/arm64/mm/mmu.c
index cfd9deb..fd85b51 100644
--- a/arch/arm64/mm/mmu.c
+++ b/arch/arm64/mm/mmu.c
@@ -1499,6 +1499,11 @@ int arch_add_memory(int nid, u64 start, u64 size,
 	if (ret)
 		__remove_pgd_mapping(swapper_pg_dir,
 				     __phys_to_virt(start), size);
+	else {
+		max_pfn = PFN_UP(start + size);
+		max_low_pfn = max_pfn;
+	}
This is a drive-by review, but it got me thinking about your changes a bit:
- if you raise max_pfn when you hot-plug memory, don't you need to lower
it when you hot-unplug memory as well? (A rough sketch of what that
could look like follows this list.)
- suppose that you have a platform which maps physical memory into the
CPU's address space at 0x00_4000_0000 (1GB offset) and the kernel boots
with 2GB of DRAM plugged by default. At that point we have not
registered a swiotlb because we have less than 4GB of addressable
physical memory, there is no IOMMU in that system, and it's a happy
world. Now assume that we plug an additional 2GB of DRAM into that
system adjacent to the previous 2GB, from 0x00_C000_0000 through
0x01_4000_0000. Now we have physical addresses above 4GB, but we still
don't have a swiotlb, so some of our DMA_BIT_MASK(32) peripherals are
going to be unable to DMA from that hot-plugged memory, even though
they could if we had a swiotlb. (The boot-time check is quoted after
this list.)
- now let's go even further, though this is very contrived. Assume that
the firmware has somehow created a reserved memory region with a
'no-map' attribute, thus indicating it does not want a struct page to
be created for a specific PFN range. Is it valid to "blindly" raise
max_pfn if that region were to be at the end of the just hot-plugged
memory? (See the last sketch below.)
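
On the first point, here is a rough, untested sketch of what the unplug
side might look like in arch_remove_memory() in arch/arm64/mm/mmu.c.
The guard and the use of memblock_end_of_DRAM() are my assumptions, not
anything from the patch, and whether memblock still contains the range
at that point in the removal path would need checking:

	/* Hypothetical, untested: only lower max_pfn when the removed
	 * range is the one that defines the current top of memory, and
	 * recompute the top from memblock rather than leaving it stale.
	 */
	if (PFN_UP(start + size) >= max_pfn) {
		max_pfn = PFN_DOWN(memblock_end_of_DRAM());
		max_low_pfn = max_pfn;
	}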
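
On the swiotlb point, for reference, the boot-time decision looks
roughly like the following (quoting arch/arm64/mm/init.c, mem_init(),
from memory, so treat it as approximate). In the scenario above, max_pfn
at boot corresponds to 0x00_C000_0000, below the 32-bit DMA limit, so
swiotlb_init() is never reached:

	/* Approximate: swiotlb is only set up at boot when memory
	 * already extends beyond the DMA limit, so memory hot-added
	 * above 4GB later never gets bounce buffers.
	 */
	if (swiotlb_force == SWIOTLB_FORCE ||
	    max_pfn > PFN_DOWN(arm64_dma_phys_limit))
		swiotlb_init(1);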
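
And on the 'no-map' question, one untested alternative to blindly taking
PFN_UP(start + size) would be to derive the new top by walking memblock,
since for_each_mem_range() skips MEMBLOCK_NOMAP regions; no claim that
this is the right policy, just a sketch:

	/* Untested sketch: compute the highest PFN over mapped memory
	 * only; for_each_mem_range() omits MEMBLOCK_NOMAP ('no-map')
	 * regions, so a carve-out at the end of the hot-plugged block
	 * would not raise max_pfn past the last mapped page.
	 */
	phys_addr_t rstart, rend;
	unsigned long top_pfn = 0;
	u64 i;

	for_each_mem_range(i, &rstart, &rend)
		top_pfn = max(top_pfn, (unsigned long)PFN_UP(rend));

	max_pfn = top_pfn;
	max_low_pfn = max_pfn;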
--
Florian