On Mon, Mar 21, 2022 at 02:16:11PM -0700, Andrew Morton wrote:
>
> The patch titled
>      Subject: mm/vmalloc.c: vmap(): don't allow invalid pages
> has been removed from the -mm tree.  Its filename was
>      vmap-dont-allow-invalid-pages.patch
>
> This patch was dropped because an updated version will be merged

Hi Andrew,

Can you please clarify which updated version you meant? Are you
waiting for a v3 from me with an extended patch comment, or
something else?

Thanks,
Yury

> ------------------------------------------------------
> From: Yury Norov <yury.norov@xxxxxxxxx>
> Subject: mm/vmalloc.c: vmap(): don't allow invalid pages
>
> vmap() takes struct page *pages as one of its arguments, and the user may
> provide an invalid pointer which would lead to a data abort at address
> translation later.
>
> Currently, the kernel checks the pages against NULL.  In my case, however,
> the address was not NULL, and was big enough so that the hardware generated
> an Address Size Abort on arm64.
>
> Interestingly, this abort happens even if copy_from_kernel_nofault() is
> used, which is quite inconvenient for debugging purposes.
>
> This patch adds a pfn_valid() check into the vmap() path, so that an
> invalid mapping will not be created.
>
> Link: https://lkml.kernel.org/r/20220119012109.551931-1-yury.norov@xxxxxxxxx
> Signed-off-by: Yury Norov <yury.norov@xxxxxxxxx>
> Suggested-by: Matthew Wilcox (Oracle) <willy@xxxxxxxxxxxxx>
> Cc: Catalin Marinas <catalin.marinas@xxxxxxx>
> Cc: Will Deacon <will@xxxxxxxxxx>
> Cc: Nicholas Piggin <npiggin@xxxxxxxxx>
> Cc: Ding Tianhong <dingtianhong@xxxxxxxxxx>
> Cc: Anshuman Khandual <anshuman.khandual@xxxxxxx>
> Cc: Matthew Wilcox <willy@xxxxxxxxxxxxx>
> Cc: Alexey Klimov <aklimov@xxxxxxxxxx>
> Cc: Uladzislau Rezki <urezki@xxxxxxxxx>
> Signed-off-by: Andrew Morton <akpm@xxxxxxxxxxxxxxxxxxxx>
> ---
>
>  mm/vmalloc.c |    2 ++
>  1 file changed, 2 insertions(+)
>
> --- a/mm/vmalloc.c~vmap-dont-allow-invalid-pages
> +++ a/mm/vmalloc.c
> @@ -478,6 +478,8 @@ static int vmap_pages_pte_range(pmd_t *p
>  			return -EBUSY;
>  		if (WARN_ON(!page))
>  			return -ENOMEM;
> +		if (WARN_ON(!pfn_valid(page_to_pfn(page))))
> +			return -EINVAL;
>  		set_pte_at(&init_mm, addr, pte, mk_pte(page, prot));
>  		(*nr)++;
>  	} while (pte++, addr += PAGE_SIZE, addr != end);
> _
>
> Patches currently in -mm which might be from yury.norov@xxxxxxxxx are
>
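
[Editor's note: the following is not part of the email. It is a minimal,
hypothetical kernel-module sketch illustrating the failure mode the
changelog describes and the behaviour the patch adds. The module name,
the bogus_page value, and the demo functions are made up for
illustration; any struct page pointer that does not map to a valid PFN
would exercise the same path.]

    /* Hypothetical sketch, not from the patch. */
    #include <linux/module.h>
    #include <linux/vmalloc.h>
    #include <linux/mm.h>

    static int __init vmap_demo_init(void)
    {
    	/* A garbage pointer, e.g. read from a corrupted pages[] array. */
    	struct page *bogus_page = (struct page *)0xffffdeadbeef0000UL;
    	struct page *pages[1] = { bogus_page };
    	void *va;

    	/*
    	 * Before the patch: the WARN_ON(!page) check passes (the pointer
    	 * is non-NULL), the PTE is written, and a later access through the
    	 * mapping can raise an Address Size Abort on arm64.
    	 *
    	 * After the patch: vmap_pages_pte_range() sees !pfn_valid() for the
    	 * bogus page, returns -EINVAL, and vmap() fails cleanly with NULL.
    	 */
    	va = vmap(pages, 1, VM_MAP, PAGE_KERNEL);
    	if (!va)
    		pr_info("vmap rejected the invalid page, as expected\n");
    	else
    		vunmap(va);

    	return 0;
    }

    static void __exit vmap_demo_exit(void) { }

    module_init(vmap_demo_init);
    module_exit(vmap_demo_exit);
    MODULE_LICENSE("GPL");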