Re: [PATCH 1/6] mm/memblock: reduce the two round insertion of memblock_add_range()

On Mon, Apr 22, 2024 at 02:55:00AM +0000, Wei Yang wrote:
> On Mon, Apr 15, 2024 at 06:17:56PM +0300, Mike Rapoport wrote:
> >On Sun, Apr 14, 2024 at 12:45:26AM +0000, Wei Yang wrote:
> >> Currently memblock_add_range() does the insertion in two rounds of
> >> iteration.
> >> 
> >> The first round calculates how many new regions are required, and the
> >> second round does the actual insertion. Between them, if the current
> >> max can't accommodate the new regions, the array is expanded.
> >> 
> >> The worst case is:
> >> 
> >> 1. cnt == max
> >> 2. the new range overlaps all existing regions
> >> 
> >> This means the final cnt would be (2 * max + 1). Since we always double
> >> the array size, we need to double the array at most twice in this worst
> >> case, which is fairly rare. For other cases, doubling the array once is
> >> enough.
> >> 
> >> Let's double the array immediately when there is no room for a new
> >> region. This simplifies the code a little.
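
(Illustrative aside, not part of the posted series: a small, self-contained
userspace model of the idea above, a single insertion pass that doubles the
backing array the moment it runs out of room. The region sizes and the
initial max of 4 are made up for the example.)

#include <stdio.h>
#include <stdlib.h>
#include <string.h>

struct region { unsigned long base, size; };

static struct region *regions;
static int cnt, max = 4;

static int insert_region(int idx, unsigned long base, unsigned long size)
{
	if (cnt == max) {		/* no room left: double immediately */
		struct region *bigger = realloc(regions, 2 * max * sizeof(*regions));

		if (!bigger)
			return -1;
		regions = bigger;
		max *= 2;
	}
	/* shift everything at and above idx up by one slot */
	memmove(&regions[idx + 1], &regions[idx], (cnt - idx) * sizeof(*regions));
	regions[idx].base = base;
	regions[idx].size = size;
	cnt++;
	return 0;
}

int main(void)
{
	regions = calloc(max, sizeof(*regions));
	if (!regions)
		return 1;

	for (int i = 0; i < 10; i++)	/* appending 10 regions doubles 4 -> 8 -> 16 */
		if (insert_region(i, i * 0x1000UL, 0x1000UL))
			return 1;

	for (int i = 0; i < cnt; i++)
		printf("region %d: base=%#lx size=%#lx\n", i,
		       regions[i].base, regions[i].size);
	free(regions);
	return 0;
}
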
> >
> >A very similar patch was posted a while ago:
> >https://lore.kernel.org/all/20221025070943.363578-1-yajun.deng@xxxxxxxxx
> >
> >and it caused a boot regression:
> >https://lore.kernel.org/linux-mm/Y2oLYB7Tu7J91tVm@xxxxxxxxxxxxx
> >
> 
> I found the reason for the error.
> 
> When we add a range to reserved and need to double the array, the
> following call chain happens:
> 
> memblock_add_range()
>     idx <- where we want to insert the new range
>     memblock_double_array()
>         find an available range for the new regions array
>         memblock_reserve() that available range
>     memblock_insert_region() at idx
> 
> The error case happens when the available range found for the new array
> is below the range we want to add: idx may change after the
> memblock_reserve() call.
> 
> The following change looks like it could fix it.
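
(A userspace illustration, not kernel code, of the index shift described
above: when memblock_reserve() places the new array's backing range below
the slot we were about to use, the change in the total region count is
exactly how far idx has to move, which is what the fix below relies on.)

#include <stdio.h>

/* Mirrors the "*idx += type->cnt - ocnt" fix-up from the diff below. */
static void fix_up(int *idx, int old_cnt, int new_cnt)
{
	*idx += new_cnt - old_cnt;
}

int main(void)
{
	int idx;

	idx = 3; fix_up(&idx, 5, 6);	/* reserved range touches no region */
	printf("no merge:       idx 3 -> %d\n", idx);
	idx = 3; fix_up(&idx, 5, 5);	/* merges with one existing region  */
	printf("one neighbour:  idx 3 -> %d\n", idx);
	idx = 3; fix_up(&idx, 5, 4);	/* bridges two existing regions     */
	printf("two neighbours: idx 3 -> %d\n", idx);
	return 0;
}
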

I don't think the micro-optimization of the second loop in
memblock_add_range() is worth the churn and potential bugs.
 
> diff --git a/mm/memblock.c b/mm/memblock.c
> index 98d25689cf10..52bc9a4b20da 100644
> --- a/mm/memblock.c
> +++ b/mm/memblock.c
> @@ -395,6 +395,7 @@ void __init memblock_discard(void)
>  /**
>   * memblock_double_array - double the size of the memblock regions array
>   * @type: memblock type of the regions array being doubled
> + * @idx: pointer to the current region index if we are iterating, or NULL
>   * @new_area_start: starting address of memory range to avoid overlap with
>   * @new_area_size: size of memory range to avoid overlap with
>   *
> @@ -408,6 +409,7 @@ void __init memblock_discard(void)
>   * 0 on success, -1 on failure.
>   */
>  static int __init_memblock memblock_double_array(struct memblock_type *type,
> +						int *idx,
>  						phys_addr_t new_area_start,
>  						phys_addr_t new_area_size)
>  {
> @@ -490,8 +492,24 @@ static int __init_memblock memblock_double_array(struct memblock_type *type,
>  	 * Reserve the new array if that comes from the memblock.  Otherwise, we
>  	 * needn't do it
>  	 */
> -	if (!use_slab)
> +	if (!use_slab) {
> +		/*
> +		 * When doubling the array for reserved.regions, we may
> +		 * need to adjust the index if new_array is found below
> +		 * new_area_start, because memblock_reserve() below will
> +		 * insert the new array's range ahead of us.
> +		 * That insertion results in one of three cases:
> +		 * 1. adjacent to no region: the index increases by one
> +		 * 2. adjacent to one region: the index is unchanged
> +		 * 3. adjacent to two regions: the index decreases by one
> +		 */
> +		int ocnt = -1;
> +		if (idx && type == &memblock.reserved && addr <= new_area_start)
> +			ocnt = type->cnt;
>  		BUG_ON(memblock_reserve(addr, new_alloc_size));
> +		if (ocnt >= 0)
> +			*idx += type->cnt - ocnt;
> +	}
>  
>  	/* Update slab flag */
>  	*in_slab = use_slab;
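
(For completeness: the hunk above only changes memblock_double_array()
itself, so its callers need matching updates. Below is a rough sketch of
what the memblock_add_range() call site might look like if the doubling
is done on demand inside the insertion loop as this series proposes. It
is not part of the posted diff; callers that are not iterating, e.g.
memblock_isolate_range(), would simply pass NULL.)

	/* sketch only: grow on demand and let memblock_double_array()
	 * fix up the running index */
	if (type->cnt >= type->max &&
	    memblock_double_array(type, &idx, obase, size) < 0)
		return -ENOMEM;
	memblock_insert_region(type, idx++, base, rbase - base, nid, flags);
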
> 
> 
> -- 
> Wei Yang
> Help you, Help me

-- 
Sincerely yours,
Mike.



