From: Vincenzo Frascino <vincenzo.frascino@xxxxxxx>

commit 519ea6f1c82fcdc9842908155ae379de47818778 upstream.

Currently, the __is_lm_address() check just masks out the top 12 bits
of the address, but if they are 0, it still yields a true result. This
has as a side effect that virt_addr_valid() returns true even for
invalid virtual addresses (e.g. 0x0).

Fix the detection checking that it's actually a kernel address starting
at PAGE_OFFSET.

Fixes: 68dd8ef32162 ("arm64: memory: Fix virt_addr_valid() using __is_lm_address()")
Cc: <stable@xxxxxxxxxxxxxxx> # 5.4.x
Cc: Will Deacon <will@xxxxxxxxxx>
Suggested-by: Catalin Marinas <catalin.marinas@xxxxxxx>
Reviewed-by: Catalin Marinas <catalin.marinas@xxxxxxx>
Acked-by: Mark Rutland <mark.rutland@xxxxxxx>
Signed-off-by: Vincenzo Frascino <vincenzo.frascino@xxxxxxx>
Link: https://lore.kernel.org/r/20210126134056.45747-1-vincenzo.frascino@xxxxxxx
Signed-off-by: Catalin Marinas <catalin.marinas@xxxxxxx>
---
 arch/arm64/include/asm/memory.h | 8 ++++----
 1 file changed, 4 insertions(+), 4 deletions(-)

diff --git a/arch/arm64/include/asm/memory.h b/arch/arm64/include/asm/memory.h
index 51d867cf146c..a77a2ae864e3 100644
--- a/arch/arm64/include/asm/memory.h
+++ b/arch/arm64/include/asm/memory.h
@@ -247,11 +247,11 @@ static inline const void *__tag_set(const void *addr, u8 tag)
 
 
 /*
- * The linear kernel range starts at the bottom of the virtual address
- * space. Testing the top bit for the start of the region is a
- * sufficient check and avoids having to worry about the tag.
+ * Check whether an arbitrary address is within the linear map, which
+ * lives in the [PAGE_OFFSET, PAGE_END) interval at the bottom of the
+ * kernel's TTBR1 address range.
  */
-#define __is_lm_address(addr)	(!(((u64)addr) & BIT(vabits_actual - 1)))
+#define __is_lm_address(addr)	(((u64)(addr) ^ PAGE_OFFSET) < (PAGE_END - PAGE_OFFSET))
 
 #define __lm_to_phys(addr)	(((addr) & ~PAGE_OFFSET) + PHYS_OFFSET)
 #define __kimg_to_phys(addr)	((addr) - kimage_voffset)
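
Not part of the patch: the following is a standalone userspace sketch of
the two checks, using placeholder constants for a 48-bit VA configuration
(on real hardware vabits_actual, PAGE_OFFSET and PAGE_END come from the
kernel at runtime). It only illustrates why the old top-bit test accepts
bogus addresses such as 0x0 while the new [PAGE_OFFSET, PAGE_END) range
check rejects them.

#include <stdio.h>
#include <stdint.h>

/* Placeholder values assuming a 48-bit VA space (illustration only). */
#define VABITS_ACTUAL	48ULL
#define PAGE_OFFSET	(~0ULL << VABITS_ACTUAL)	/* 0xffff000000000000 */
#define PAGE_END	(~0ULL << (VABITS_ACTUAL - 1))	/* 0xffff800000000000 */

/* Old check: only tests the top bit of the kernel VA range. */
static int old_is_lm_address(uint64_t addr)
{
	return !(addr & (1ULL << (VABITS_ACTUAL - 1)));
}

/* New check: address must fall inside [PAGE_OFFSET, PAGE_END). */
static int new_is_lm_address(uint64_t addr)
{
	return (addr ^ PAGE_OFFSET) < (PAGE_END - PAGE_OFFSET);
}

int main(void)
{
	uint64_t samples[] = { 0x0, 0x1000, PAGE_OFFSET,
			       PAGE_OFFSET + 0x1000, PAGE_END };
	size_t i;

	for (i = 0; i < sizeof(samples) / sizeof(samples[0]); i++)
		printf("0x%016llx  old=%d  new=%d\n",
		       (unsigned long long)samples[i],
		       old_is_lm_address(samples[i]),
		       new_is_lm_address(samples[i]));
	return 0;
}

With these placeholder constants, 0x0 and 0x1000 report old=1 new=0,
which mirrors the virt_addr_valid() false positive described above;
linear-map addresses at or above PAGE_OFFSET (and below PAGE_END) report
1 under both checks.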