Re: [PATCH v3 1/1] arm64: mm: correct the inside linear map range during hotplug check

On 2/16/21 8:33 PM, Pavel Tatashin wrote:
> Memory hotplug may fail on systems with CONFIG_RANDOMIZE_BASE because the
> linear map range is not checked correctly.
> 
> The start physical address that the linear map covers can actually be at
> the end of the range because of randomization. Check for that, and if so,
> reduce it to 0.
> 
> This can be verified on QEMU by setting kaslr-seed to ~0ul:
> 
> memstart_offset_seed = 0xffff
> START: __pa(_PAGE_OFFSET(vabits_actual)) = ffff9000c0000000
> END:   __pa(PAGE_END - 1) =  1000bfffffff

This would have tripped the check in mhp_get_pluggable_range(), producing
errors like the ones below, which is expected.

Hotplug memory [0x680000000-0x688000000] exceeds maximum addressable range [0x0-0x0]
Hotplug memory [0x6c0000000-0x6c8000000] exceeds maximum addressable range [0x0-0x0]
Hotplug memory [0x700000000-0x708000000] exceeds maximum addressable range [0x0-0x0]
Hotplug memory [0x780000000-0x788000000] exceeds maximum addressable range [0x0-0x0]
Hotplug memory [0x7c0000000-0x7c8000000] exceeds maximum addressable range [0x0-0x0]
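
For context on why the logs show [0x0-0x0]: the wrapped start address
(0xffff9000c0000000) is above the maximum addressable physical address, so
the generic code collapses the pluggable range to an empty one before the
range check rejects each hotplug request. Roughly, as a paraphrase of
mhp_get_pluggable_range() in mm/memory_hotplug.c (exact details may differ
between kernel versions):

	struct range mhp_get_pluggable_range(bool need_mapping)
	{
		/* Highest physical address the kernel can address at all. */
		const u64 max_phys = (1ULL << MAX_PHYSMEM_BITS) - 1;
		struct range mhp_range;

		if (need_mapping) {
			/* Ask the arch which range it can map linearly. */
			mhp_range = arch_get_mappable_range();
			if (mhp_range.start > max_phys) {
				/* Out-of-bounds start: nothing is pluggable. */
				mhp_range.start = 0;
				mhp_range.end = 0;
			}
			mhp_range.end = min_t(u64, mhp_range.end, max_phys);
		} else {
			mhp_range.start = 0;
			mhp_range.end = max_phys;
		}
		return mhp_range;
	}

With the randomized start clamped to zero on arm64, arch_get_mappable_range()
again returns a sane [0, end_linear_pa] window and the requests above succeed.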

> 
> Signed-off-by: Pavel Tatashin <pasha.tatashin@xxxxxxxxxx>
> Fixes: 58284a901b42 ("arm64/mm: Validate hotplug range before creating linear mapping")
> Tested-by: Tyler Hicks <tyhicks@xxxxxxxxxxxxxxxxxxx>
> ---
>  arch/arm64/mm/mmu.c | 21 +++++++++++++++++++--
>  1 file changed, 19 insertions(+), 2 deletions(-)
> 
> diff --git a/arch/arm64/mm/mmu.c b/arch/arm64/mm/mmu.c
> index ef7698c4e2f0..0d9c115e427f 100644
> --- a/arch/arm64/mm/mmu.c
> +++ b/arch/arm64/mm/mmu.c
> @@ -1447,6 +1447,22 @@ static void __remove_pgd_mapping(pgd_t *pgdir, unsigned long start, u64 size)
>  struct range arch_get_mappable_range(void)
>  {
>  	struct range mhp_range;
> +	u64 start_linear_pa = __pa(_PAGE_OFFSET(vabits_actual));
> +	u64 end_linear_pa = __pa(PAGE_END - 1);
> +
> +	if (IS_ENABLED(CONFIG_RANDOMIZE_BASE)) {
> +		/*
> +		 * Check for a wrap: because of the randomized linear mapping,
> +		 * the start physical address can end up bigger than the end
> +		 * physical address. In that case set start to zero, as the
> +		 * range [0, end_linear_pa] must still be able to cover all
> +		 * addressable physical addresses.
> +		 */
> +		if (start_linear_pa > end_linear_pa)
> +			start_linear_pa = 0;
> +	}
> +
> +	WARN_ON(start_linear_pa > end_linear_pa);
>  
>  	/*
>  	 * Linear mapping region is the range [PAGE_OFFSET..(PAGE_END - 1)]
> @@ -1454,8 +1470,9 @@ struct range arch_get_mappable_range(void)
>  	 * range which can be mapped inside this linear mapping range, must
>  	 * also be derived from its end points.
>  	 */
> -	mhp_range.start = __pa(_PAGE_OFFSET(vabits_actual));
> -	mhp_range.end =  __pa(PAGE_END - 1);
> +	mhp_range.start = start_linear_pa;
> +	mhp_range.end =  end_linear_pa;
> +
>  	return mhp_range;
>  }

LGTM.

Reviewed-by: Anshuman Khandual <anshuman.khandual@xxxxxxx>



