The patch titled
     Subject: mm/memory.c: make remap_pfn_range() reject unaligned addr
has been added to the -mm tree.  Its filename is
     mm-memoryc-make-remap_pfn_range-reject-unaligned-addr.patch

This patch should soon appear at
    http://ozlabs.org/~akpm/mmots/broken-out/mm-memoryc-make-remap_pfn_range-reject-unaligned-addr.patch
and later at
    http://ozlabs.org/~akpm/mmotm/broken-out/mm-memoryc-make-remap_pfn_range-reject-unaligned-addr.patch

Before you just go and hit "reply", please:
   a) Consider who else should be cc'ed
   b) Prefer to cc a suitable mailing list as well
   c) Ideally: find the original patch on the mailing list and do a
      reply-to-all to that, adding suitable additional cc's

*** Remember to use Documentation/process/submit-checklist.rst when
    testing your code ***

The -mm tree is included into linux-next and is updated
there every 3-4 working days

------------------------------------------------------
From: Alex Zhang <zhangalex@xxxxxxxxxx>
Subject: mm/memory.c: make remap_pfn_range() reject unaligned addr

This function implicitly assumes that the addr passed in is page aligned.
A non page aligned addr could ultimately cause a kernel bug in
remap_pte_range as the exit condition in the logic loop may never be
satisfied.

This patch documents the need for the requirement, as well as explicitly
adds a check for it.
Link: http://lkml.kernel.org/r/20200617233512.177519-1-zhangalex@xxxxxxxxxx
Signed-off-by: Alex Zhang <zhangalex@xxxxxxxxxx>
Reviewed-by: Andrew Morton <akpm@xxxxxxxxxxxxxxxxxxxx>
Signed-off-by: Andrew Morton <akpm@xxxxxxxxxxxxxxxxxxxx>
---

 mm/memory.c |    5 ++++-
 1 file changed, 4 insertions(+), 1 deletion(-)

--- a/mm/memory.c~mm-memoryc-make-remap_pfn_range-reject-unaligned-addr
+++ a/mm/memory.c
@@ -2081,7 +2081,7 @@ static inline int remap_p4d_range(struct
 /**
  * remap_pfn_range - remap kernel memory to userspace
  * @vma: user vma to map to
- * @addr: target user address to start at
+ * @addr: target page aligned user address to start at
  * @pfn: page frame number of kernel physical memory address
  * @size: size of mapping area
  * @prot: page protection flags for this mapping
@@ -2100,6 +2100,9 @@ int remap_pfn_range(struct vm_area_struc
 	unsigned long remap_pfn = pfn;
 	int err;
 
+	if (WARN_ON_ONCE(!PAGE_ALIGNED(addr)))
+		return -EINVAL;
+
 	/*
 	 * Physically remapped pages are special.  Tell the
 	 * rest of the world about it:
_

Patches currently in -mm which might be from zhangalex@xxxxxxxxxx are

mm-memoryc-make-remap_pfn_range-reject-unaligned-addr.patch