On Fri, 4 Jun 2021, Matthew Wilcox wrote:
> On Thu, Jun 03, 2021 at 02:40:30PM -0700, Hugh Dickins wrote:
> >  static inline unsigned long
> > -__vma_address(struct page *page, struct vm_area_struct *vma)
> > +vma_address(struct page *page, struct vm_area_struct *vma)
> >  {
> > -	pgoff_t pgoff = page_to_pgoff(page);
> > -	return vma->vm_start + ((pgoff - vma->vm_pgoff) << PAGE_SHIFT);
> > +	pgoff_t pgoff;
> > +	unsigned long address;
> > +
> > +	VM_BUG_ON_PAGE(PageKsm(page), page);	/* KSM page->index unusable */
> > +	pgoff = page_to_pgoff(page);
> > +	if (pgoff >= vma->vm_pgoff) {
> > +		address = vma->vm_start +
> > +			((pgoff - vma->vm_pgoff) << PAGE_SHIFT);
> > +		/* Check for address beyond vma (or wrapped through 0?) */
> > +		if (address < vma->vm_start || address >= vma->vm_end)
> > +			address = -EFAULT;
> > +	} else if (PageHead(page) &&
> > +		   pgoff + compound_nr(page) > vma->vm_pgoff) {
> 
> I think on 32-bit, you need ...
> 
> 	pgoff + compound_nr(page) - 1 >= vma->vm_pgoff
> 
> ... right?

Hey, beating me at my own game ;-)

I'm pretty sure you're right (and it's true that I first wrote this
patch before becoming conscious of the 32-bit MAX_LFS_FILESIZE issue);
but caution tells me to think some more, and check some places, before
committing to that.

Thanks,
Hugh