On Wed, 17 Jun 2020 15:34:14 -0700 Kaiyu Zhang <zhangalex@xxxxxxxxxx> wrote:

> From: Alex Zhang <zhangalex@xxxxxxxxxx>
>
> This function implicitly assumes that the addr passed in is page aligned.
> A non page aligned addr could ultimately cause a kernel bug in
> remap_pte_range as the exit condition in the logic loop may never be
> satisfied.  This patch documents the requirement, as well as explicitly
> adding a check for it.
>
> ...
>
> --- a/mm/memory.c
> +++ b/mm/memory.c
> @@ -2081,7 +2081,7 @@ static inline int remap_p4d_range(struct mm_struct *mm, pgd_t *pgd,
>  /**
>   * remap_pfn_range - remap kernel memory to userspace
>   * @vma: user vma to map to
> - * @addr: target user address to start at
> + * @addr: target page aligned user address to start at
>   * @pfn: page frame number of kernel physical memory address
>   * @size: size of mapping area
>   * @prot: page protection flags for this mapping
> @@ -2100,6 +2100,9 @@ int remap_pfn_range(struct vm_area_struct *vma, unsigned long addr,
>  	unsigned long remap_pfn = pfn;
>  	int err;
>  
> +	if (!PAGE_ALIGN(addr))
> +		return -EINVAL;
> +

That won't work: PAGE_ALIGN() rounds the address up rather than testing its
alignment, so the check can essentially never fire.  PAGE_ALIGNED() will do
the job.  Also, as this is an error in the calling code it would be better
to do

	if (WARN_ON_ONCE(!PAGE_ALIGNED(addr)))
		return -EINVAL;

so that the offending code can be identified and fixed up.

Is there any code in the kernel tree which actually has this error?
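
For what it's worth, here is a minimal user-space sketch of why the proposed
check cannot catch a misaligned address.  The two macros below are simplified
stand-ins for the kernel's PAGE_ALIGN()/PAGE_ALIGNED() (assuming 4 KiB pages),
shown only to illustrate the semantics:

	#include <stdio.h>

	/* Simplified stand-ins for the kernel macros; 4 KiB pages assumed. */
	#define PAGE_SIZE		4096UL
	#define PAGE_ALIGN(addr)	(((addr) + PAGE_SIZE - 1) & ~(PAGE_SIZE - 1))	/* rounds up */
	#define PAGE_ALIGNED(addr)	(((addr) & (PAGE_SIZE - 1)) == 0)		/* boolean test */

	int main(void)
	{
		unsigned long addr = 0x1234;	/* deliberately not page aligned */

		/*
		 * PAGE_ALIGN() rounds 0x1234 up to 0x2000, a non-zero value,
		 * so !PAGE_ALIGN(addr) is 0 and the misalignment is missed.
		 */
		printf("!PAGE_ALIGN(addr)  = %d\n", !PAGE_ALIGN(addr));

		/* PAGE_ALIGNED() tests the low bits and correctly reports 0 (false). */
		printf("PAGE_ALIGNED(addr) = %d\n", PAGE_ALIGNED(addr));

		return 0;
	}

So the check only rejects an address whose rounded-up value is zero, which is
not the condition the patch is trying to enforce.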