* Lorenzo Stoakes <lorenzo.stoakes@xxxxxxxxxx> [241022 16:41]:
> After an attempted mmap() fails, we are no longer in a situation where we
> can safely interact with VMA hooks. This is currently not enforced, meaning
> that we need complicated handling to ensure we do not incorrectly call
> these hooks.
>
> We can avoid the whole issue by treating the VMA as suspect the moment that
> the file->f_ops->mmap() function reports an error by replacing whatever VMA
> operations were installed with a dummy empty set of VMA operations.
>
> We do so through a new helper function internal to mm - mmap_file() - which
> is both more logically named than the existing call_mmap() function and
> correctly isolates handling of the vm_op reassignment to mm.
>
> All the existing invocations of call_mmap() outside of mm are ultimately
> nested within the call_mmap() from mm, which we now replace.
>
> It is therefore safe to leave call_mmap() in place as a convenience
> function (and to avoid churn). The invokers are:
>
>     ovl_file_operations -> mmap -> ovl_mmap() -> backing_file_mmap()
>     coda_file_operations -> mmap -> coda_file_mmap()
>     shm_file_operations -> shm_mmap()
>     shm_file_operations_huge -> shm_mmap()
>     dma_buf_fops -> dma_buf_mmap_internal -> i915_dmabuf_ops
>         -> i915_gem_dmabuf_mmap()
>
> None of these callers interact with vm_ops or mappings in a problematic way
> on error, quickly exiting out.
>
> Reported-by: Jann Horn <jannh@xxxxxxxxxx>
> Fixes: deb0f6562884 ("mm/mmap: undo ->mmap() when arch_validate_flags() fails")
> Cc: stable <stable@xxxxxxxxxx>
> Signed-off-by: Lorenzo Stoakes <lorenzo.stoakes@xxxxxxxxxx>

Reviewed-by: Liam R. Howlett <Liam.Howlett@xxxxxxxxxx>

> ---
>  mm/internal.h | 27 +++++++++++++++++++++++++++
>  mm/mmap.c     |  6 +++---
>  mm/nommu.c    |  4 ++--
>  3 files changed, 32 insertions(+), 5 deletions(-)
>
> diff --git a/mm/internal.h b/mm/internal.h
> index 508f7802dd2b..af032e76dfd4 100644
> --- a/mm/internal.h
> +++ b/mm/internal.h
> @@ -108,6 +108,33 @@ static inline void *folio_raw_mapping(const struct folio *folio)
>  	return (void *)(mapping & ~PAGE_MAPPING_FLAGS);
>  }
>
> +/*
> + * This is a file-backed mapping, and is about to be memory mapped - invoke its
> + * mmap hook and safely handle error conditions. On error, VMA hooks will be
> + * mutated.
> + *
> + * @file: File which backs the mapping.
> + * @vma:  VMA which we are mapping.
> + *
> + * Returns: 0 if success, error otherwise.
> + */
> +static inline int mmap_file(struct file *file, struct vm_area_struct *vma)
> +{
> +	int err = call_mmap(file, vma);
> +
> +	if (likely(!err))
> +		return 0;
> +
> +	/*
> +	 * OK, we tried to call the file hook for mmap(), but an error
> +	 * arose. The mapping is in an inconsistent state and we must not invoke
> +	 * any further hooks on it.
> +	 */
> +	vma->vm_ops = &vma_dummy_vm_ops;
> +
> +	return err;
> +}
> +
>  #ifdef CONFIG_MMU
>
>  /* Flags for folio_pte_batch(). */
> diff --git a/mm/mmap.c b/mm/mmap.c
> index 1ba0878bbc30..10f4ccaf491b 100644
> --- a/mm/mmap.c
> +++ b/mm/mmap.c
> @@ -1425,7 +1425,7 @@ unsigned long mmap_region(struct file *file, unsigned long addr,
>  	/*
>  	 * clear PTEs while the vma is still in the tree so that rmap
>  	 * cannot race with the freeing later in the truncate scenario.
> -	 * This is also needed for call_mmap(), which is why vm_ops
> +	 * This is also needed for mmap_file(), which is why vm_ops
>  	 * close function is called.
>  	 */
>  	vms_clean_up_area(&vms, &mas_detach);
> @@ -1450,7 +1450,7 @@ unsigned long mmap_region(struct file *file, unsigned long addr,
>
>  	if (file) {
>  		vma->vm_file = get_file(file);
> -		error = call_mmap(file, vma);
> +		error = mmap_file(file, vma);
>  		if (error)
>  			goto unmap_and_free_vma;
>
> @@ -1473,7 +1473,7 @@ unsigned long mmap_region(struct file *file, unsigned long addr,
>
>  	vma_iter_config(&vmi, addr, end);
>  	/*
> -	 * If vm_flags changed after call_mmap(), we should try merge
> +	 * If vm_flags changed after mmap_file(), we should try merge
>  	 * vma again as we may succeed this time.
>  	 */
>  	if (unlikely(vm_flags != vma->vm_flags && vmg.prev)) {
> diff --git a/mm/nommu.c b/mm/nommu.c
> index 385b0c15add8..f9ccc02458ec 100644
> --- a/mm/nommu.c
> +++ b/mm/nommu.c
> @@ -885,7 +885,7 @@ static int do_mmap_shared_file(struct vm_area_struct *vma)
>  {
>  	int ret;
>
> -	ret = call_mmap(vma->vm_file, vma);
> +	ret = mmap_file(vma->vm_file, vma);
>  	if (ret == 0) {
>  		vma->vm_region->vm_top = vma->vm_region->vm_end;
>  		return 0;
> @@ -918,7 +918,7 @@ static int do_mmap_private(struct vm_area_struct *vma,
>  	 * happy.
>  	 */
>  	if (capabilities & NOMMU_MAP_DIRECT) {
> -		ret = call_mmap(vma->vm_file, vma);
> +		ret = mmap_file(vma->vm_file, vma);
>  		/* shouldn't return success if we're not sharing */
>  		if (WARN_ON_ONCE(!is_nommu_shared_mapping(vma->vm_flags)))
>  			ret = -ENOSYS;
> --
> 2.47.0