vmap() takes struct page *pages as one of its arguments, and a user may
provide an invalid pointer, which would lead to a data abort during
address translation later. Currently, the kernel only checks the pages
against NULL. In my case, however, the address was not NULL, and was
large enough that the hardware generated an Address Size Abort on arm64.

Interestingly, this abort happens even if copy_from_kernel_nofault() is
used, which is quite inconvenient for debugging purposes.

This patch adds a pfn_valid() check into the vmap() path, so that an
invalid mapping will not be created.

RFC: https://lkml.org/lkml/2022/1/18/815
v1:  https://lkml.org/lkml/2022/1/18/1026
v2:  Patch description changed.

Suggested-by: Matthew Wilcox (Oracle) <willy@xxxxxxxxxxxxx>
Signed-off-by: Yury Norov <yury.norov@xxxxxxxxx>
---
 mm/vmalloc.c | 2 ++
 1 file changed, 2 insertions(+)

diff --git a/mm/vmalloc.c b/mm/vmalloc.c
index d2a00ad4e1dd..a4134ee56b10 100644
--- a/mm/vmalloc.c
+++ b/mm/vmalloc.c
@@ -477,6 +477,8 @@ static int vmap_pages_pte_range(pmd_t *pmd, unsigned long addr,
 			return -EBUSY;
 		if (WARN_ON(!page))
 			return -ENOMEM;
+		if (WARN_ON(!pfn_valid(page_to_pfn(page))))
+			return -EINVAL;
 		set_pte_at(&init_mm, addr, pte, mk_pte(page, prot));
 		(*nr)++;
 	} while (pte++, addr += PAGE_SIZE, addr != end);
-- 
2.30.2
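
For reviewers who want to see the failure mode from the caller's side,
here is a hypothetical reproducer sketch (not part of the patch): a test
module that hands vmap() a garbage, non-NULL struct page pointer. The
module name, the bogus address and the log messages are made up for
illustration. Without the check, the mapping is installed and the first
access through it can raise an Address Size Abort on arm64; with the
check, vmap() is expected to fail and return NULL instead.

/*
 * Hypothetical reproducer sketch: pass a garbage (non-NULL) struct page
 * pointer to vmap() and observe whether the mapping is created.
 */
#include <linux/module.h>
#include <linux/kernel.h>
#include <linux/mm.h>
#include <linux/vmalloc.h>

static int __init bad_vmap_init(void)
{
	/* Garbage pointer standing in for a corrupted pages[] entry. */
	struct page *pages[1] = { (struct page *)0xffffffc012345678UL };
	void *va;

	va = vmap(pages, 1, VM_MAP, PAGE_KERNEL);
	if (!va) {
		/* Patched kernel: pfn_valid() check rejects the page. */
		pr_info("vmap() rejected the bogus page, as expected\n");
		return 0;
	}

	/*
	 * Pre-patch path: a PTE with an out-of-range output address was
	 * installed; dereferencing va here would abort the kernel, so
	 * just report and tear the mapping down.
	 */
	pr_info("vmap() mapped the bogus page at %px\n", va);
	vunmap(va);
	return 0;
}

static void __exit bad_vmap_exit(void)
{
}

module_init(bad_vmap_init);
module_exit(bad_vmap_exit);
MODULE_LICENSE("GPL");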