Re: [PATCH] memblock: config the number of init memblock regions

On Wed, 11 May 2022 01:05:30 +0000 Zhou Guanghui <zhouguanghui1@xxxxxxxxxx> wrote:

> During early boot, the number of memblocks may exceed 128 (some memory
> areas are not reported to the kernel due to test failures; as a result,
> contiguous memory is divided into multiple parts for reporting). If
> this limit is exceeded before the memblock region arrays can be
> resized, the excess memory is lost.
> 
> ...
>
> --- a/mm/Kconfig
> +++ b/mm/Kconfig
> @@ -89,6 +89,14 @@ config SPARSEMEM_VMEMMAP
>  	  pfn_to_page and page_to_pfn operations.  This is the most
>  	  efficient option when sufficient kernel resources are available.
>  
> +config MEMBLOCK_INIT_REGIONS
> +	int "Number of init memblock regions"
> +	range 128 1024
> +	default 128
> +	help
> +	  The number of init memblock regions used to track "memory" and
> +	  "reserved" memblocks during early boot.
> +
>  config HAVE_MEMBLOCK_PHYS_MAP
>  	bool
>  
> diff --git a/mm/memblock.c b/mm/memblock.c
> index e4f03a6e8e56..6893d26b750e 100644
> --- a/mm/memblock.c
> +++ b/mm/memblock.c
> @@ -22,7 +22,7 @@
>  
>  #include "internal.h"
>  
> -#define INIT_MEMBLOCK_REGIONS			128
> +#define INIT_MEMBLOCK_REGIONS			CONFIG_MEMBLOCK_INIT_REGIONS

Consistent naming would be nice - MEMBLOCK_INIT versus INIT_MEMBLOCK.

Can we simply increase INIT_MEMBLOCK_REGIONS to 1024 and avoid the
config option?  It appears that the overhead from this would be 60kB or
so.  Or zero if CONFIG_ARCH_KEEP_MEMBLOCK and CONFIG_MEMORY_HOTPLUG
are cooperating.



