On Fri, Apr 22, 2022 at 08:48:25AM -0500, Rob Herring wrote:
> On Wed, Apr 20, 2022 at 4:09 PM Patrick Daly <quic_pdaly@xxxxxxxxxxx> wrote:
> >
> > The reserved_mem array must be statically allocated because it is used
> > prior to memblock being aware of all "no-map" or otherwise reserved
> > regions which have fixed physical addresses. Due to this limitation,
> > if one architecture/board has a large number of reserved_mem regions,
> > this limit must be raised for all.
> >
> > In particular, certain new qcom boards currently have 63 reserved memory
> > regions, which, when new features are added, pushes them over the
> > existing limit of 64.
>
> Please revive this instead:
>
> https://lore.kernel.org/all/20211119075844.2902592-3-calvinzhang.cool@xxxxxxxxx/

Looks interesting; thanks for pointing it out.

> >
> > A generalized breakdown by region type:
> > 13 for linux-loaded device firmware
>
> If loaded by linux, why do you need fixed carveouts in DT? The devices
> can't be told what address their fw is at?
>

These are good questions, but I don't know enough about the various
usecases to give complete answers.

> > 9 for guest-vms or inter-vm communication
>
> Why does that need to be in DT?

It's a type-1 hypervisor, so the guest VMs don't run in userspace. We
want to use largely physically contiguous memory to reduce TLB pressure,
so a devicetree carveout seemed like a simple way of realizing this.

> > 15 cma heaps/dma-buf heaps
>
> Sounds like not trusting the OS to allocate memory itself. From what
> I've read, the kernel's memory allocation abilities are better now
> than when CMA was added.

Greater than MAX_ORDER physically contiguous memory is only available
through CMA, so it comes down to whether the usecase supports scattered
memory or not.

I believe there is also a concern that placing many usecases in the same
CMA area could lead to fragmentation and allocation failures for sizes
which are a large percentage of the total CMA area size.

> > 24 for bootloaders/hypervisor/secure-world devices or software
> > 2 misc
> >
> > Although this number could be reduced by a minor amount by combining
> > physically adjacent regions, this comes at the cost of losing
> > documentation on what/who the regions are used by. In addition,
> > combining adjacent regions is not possible if there are phandles in
> > devicetree referring to the regions in question, such as
> > "memory-region".
> >
> > Vmlinux before:
> >     text     data    bss      dec     hex filename
> > 31030829 15807732 588524 47427085 2d3ae0d dist/vmlinux
> >
> > Vmlinux after:
> >     text     data    bss      dec     hex filename
> > 31030877 15807668 592108 47430653 2d3bbfd dist/vmlinux
> >
> > Signed-off-by: Patrick Daly <quic_pdaly@xxxxxxxxxxx>
> > ---
> >  drivers/of/of_reserved_mem.c | 2 +-
> >  1 file changed, 1 insertion(+), 1 deletion(-)
> >
> > diff --git a/drivers/of/of_reserved_mem.c b/drivers/of/of_reserved_mem.c
> > index 75caa6f..de0cdda 100644
> > --- a/drivers/of/of_reserved_mem.c
> > +++ b/drivers/of/of_reserved_mem.c
> > @@ -26,7 +26,7 @@
> >
> >  #include "of_private.h"
> >
> > -#define MAX_RESERVED_REGIONS	64
> > +#define MAX_RESERVED_REGIONS	128
> >  static struct reserved_mem reserved_mem[MAX_RESERVED_REGIONS];
> >  static int reserved_mem_count;
> >
> > --
> > 2.7.4
> >
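
For reference, the kinds of devicetree nodes involved look roughly like
the sketch below. The node names, unit addresses, and sizes are purely
illustrative and are not taken from any actual board file:

	reserved-memory {
		#address-cells = <2>;
		#size-cells = <2>;
		ranges;

		/* fixed-address firmware carveout, excluded from the linear map */
		fw_mem: firmware@9a000000 {
			reg = <0x0 0x9a000000 0x0 0x1000000>;
			no-map;
		};

		/* CMA-style heap region the OS can reuse while the device is idle */
		heap_mem: heap@9b000000 {
			compatible = "shared-dma-pool";
			reusable;
			reg = <0x0 0x9b000000 0x0 0x8000000>;
		};
	};

	some-device@17300000 {
		/* consumer reference; folding fw_mem into a neighbor would break this */
		memory-region = <&fw_mem>;
	};

Each child of /reserved-memory occupies one slot in the static
reserved_mem[] array, which is why boards carrying many such carveouts
end up pressing against MAX_RESERVED_REGIONS.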