5.4-stable review patch.  If anyone has any objections, please let me know.

------------------

From: Andi Shyti <andi.shyti@xxxxxxxxxxxxxxx>

commit 8bdd9ef7e9b1b2a73e394712b72b22055e0e26c3 upstream.

Calculating the size of the mapped area as the lesser value between the
requested size and the actual size does not consider the partial
mapping offset. This can cause page fault access.

Fix the calculation of the starting and ending addresses; the total
size is now deduced from the difference between the end and start
addresses.

Additionally, the calculations have been rewritten in a clearer and
more understandable form.

Fixes: c58305af1835 ("drm/i915: Use remap_io_mapping() to prefault all PTE in a single pass")
Reported-by: Jann Horn <jannh@xxxxxxxxxx>
Co-developed-by: Chris Wilson <chris.p.wilson@xxxxxxxxxxxxxxx>
Signed-off-by: Chris Wilson <chris.p.wilson@xxxxxxxxxxxxxxx>
Signed-off-by: Andi Shyti <andi.shyti@xxxxxxxxxxxxxxx>
Cc: Joonas Lahtinen <joonas.lahtinen@xxxxxxxxxxxxxxx>
Cc: Matthew Auld <matthew.auld@xxxxxxxxx>
Cc: Rodrigo Vivi <rodrigo.vivi@xxxxxxxxx>
Cc: <stable@xxxxxxxxxxxxxxx> # v4.9+
Reviewed-by: Jann Horn <jannh@xxxxxxxxxx>
Reviewed-by: Jonathan Cavitt <Jonathan.cavitt@xxxxxxxxx>
[Joonas: Add Requires: tag]
Requires: 60a2066c5005 ("drm/i915/gem: Adjust vma offset for framebuffer mmap offset")
Signed-off-by: Joonas Lahtinen <joonas.lahtinen@xxxxxxxxxxxxxxx>
Link: https://patchwork.freedesktop.org/patch/msgid/20240802083850.103694-3-andi.shyti@xxxxxxxxxxxxxxx
(cherry picked from commit 97b6784753da06d9d40232328efc5c5367e53417)
Signed-off-by: Joonas Lahtinen <joonas.lahtinen@xxxxxxxxxxxxxxx>
Signed-off-by: Greg Kroah-Hartman <gregkh@xxxxxxxxxxxxxxxxxxx>
---
 drivers/gpu/drm/i915/gem/i915_gem_mman.c |   47 +++++++++++++++++++++++++++----
 1 file changed, 42 insertions(+), 5 deletions(-)

--- a/drivers/gpu/drm/i915/gem/i915_gem_mman.c
+++ b/drivers/gpu/drm/i915/gem/i915_gem_mman.c
@@ -196,6 +196,39 @@ compute_partial_view(const struct drm_i9
 	return view;
 }
 
+static void set_address_limits(struct vm_area_struct *area,
+			       struct i915_vma *vma,
+			       unsigned long *start_vaddr,
+			       unsigned long *end_vaddr)
+{
+	unsigned long vm_start, vm_end, vma_size; /* user's memory parameters */
+	long start, end; /* memory boundaries */
+
+	/*
+	 * Let's move into the ">> PAGE_SHIFT"
+	 * domain to be sure not to lose bits
+	 */
+	vm_start = area->vm_start >> PAGE_SHIFT;
+	vm_end = area->vm_end >> PAGE_SHIFT;
+	vma_size = vma->size >> PAGE_SHIFT;
+
+	/*
+	 * Calculate the memory boundaries by considering the offset
+	 * provided by the user during memory mapping and the offset
+	 * provided for the partial mapping.
+	 */
+	start = vm_start;
+	start += vma->ggtt_view.partial.offset;
+	end = start + vma_size;
+
+	start = max_t(long, start, vm_start);
+	end = min_t(long, end, vm_end);
+
+	/* Let's move back into the "<< PAGE_SHIFT" domain */
+	*start_vaddr = (unsigned long)start << PAGE_SHIFT;
+	*end_vaddr = (unsigned long)end << PAGE_SHIFT;
+}
+
 /**
  * i915_gem_fault - fault a page into the GTT
  * @vmf: fault info
@@ -224,9 +257,11 @@ vm_fault_t i915_gem_fault(struct vm_faul
 	struct intel_runtime_pm *rpm = &i915->runtime_pm;
 	struct i915_ggtt *ggtt = &i915->ggtt;
 	bool write = area->vm_flags & VM_WRITE;
+	unsigned long start, end; /* memory boundaries */
 	intel_wakeref_t wakeref;
 	struct i915_vma *vma;
 	pgoff_t page_offset;
+	unsigned long pfn;
 	int srcu;
 	int ret;
 
@@ -295,12 +330,14 @@ vm_fault_t i915_gem_fault(struct vm_faul
 	if (ret)
 		goto err_unpin;
 
+	set_address_limits(area, vma, &start, &end);
+
+	pfn = (ggtt->gmadr.start + i915_ggtt_offset(vma)) >> PAGE_SHIFT;
+	pfn += (start - area->vm_start) >> PAGE_SHIFT;
+	pfn -= vma->ggtt_view.partial.offset;
+
 	/* Finally, remap it using the new GTT offset */
-	ret = remap_io_mapping(area,
-			       area->vm_start + (vma->ggtt_view.partial.offset << PAGE_SHIFT),
-			       (ggtt->gmadr.start + vma->node.start) >> PAGE_SHIFT,
-			       min_t(u64, vma->size, area->vm_end - area->vm_start),
-			       &ggtt->iomap);
+	ret = remap_io_mapping(area, start, pfn, end - start, &ggtt->iomap);
 	if (ret)
 		goto err_fence;
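
For illustration only (this is not part of the patch, and the page counts
below are made up): a minimal user-space sketch of the arithmetic being
fixed. The old path sized the remap as min(vma->size, vm_end - vm_start)
while starting at vm_start plus the partial-view offset, so the mapping
could run past the end of the user's VMA; the new path clamps the start
and end to the VMA and derives the size from their difference.

/*
 * Sketch of the old vs. new extent calculation, everything in units
 * of pages.  Values are hypothetical, chosen to show the overshoot.
 */
#include <stdio.h>

#define MIN(a, b) ((a) < (b) ? (a) : (b))
#define MAX(a, b) ((a) > (b) ? (a) : (b))

int main(void)
{
	unsigned long vm_start = 100, vm_end = 110;	/* 10-page user VMA      */
	unsigned long vma_size = 10;			/* size of the GGTT view */
	unsigned long partial_offset = 4;		/* partial mapping start */

	/* Old: the start honours the offset, but the size does not. */
	unsigned long old_start = vm_start + partial_offset;
	unsigned long old_size  = MIN(vma_size, vm_end - vm_start);
	printf("old: pages [%lu, %lu), but the VMA ends at page %lu\n",
	       old_start, old_start + old_size, vm_end);

	/* New: clamp start/end to the VMA, size = end - start. */
	unsigned long start = MAX(vm_start + partial_offset, vm_start);
	unsigned long end   = MIN(vm_start + partial_offset + vma_size, vm_end);
	printf("new: pages [%lu, %lu), size %lu pages\n",
	       start, end, end - start);

	return 0;
}

With these numbers the old calculation covers pages [104, 114) against a
VMA that ends at page 110, i.e. four pages beyond what the user mapped,
while the new calculation stays within [104, 110).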