This is a note to let you know that I've just added the patch titled

    i915: Limit the length of an sg list to the requested length

to the 6.5-stable tree which can be found at:

    http://www.kernel.org/git/?p=linux/kernel/git/stable/stable-queue.git;a=summary

The filename of the patch is:
     i915-limit-the-length-of-an-sg-list-to-the-requested-length.patch
and it can be found in the queue-6.5 subdirectory.

If you, or anyone else, feels it should not be added to the stable
tree, please let <stable@xxxxxxxxxxxxxxx> know about it.


>From 863a8eb3f27098b42772f668e3977ff4cae10b04 Mon Sep 17 00:00:00 2001
From: "Matthew Wilcox (Oracle)" <willy@xxxxxxxxxxxxx>
Date: Tue, 19 Sep 2023 20:48:55 +0100
Subject: i915: Limit the length of an sg list to the requested length

From: Matthew Wilcox (Oracle) <willy@xxxxxxxxxxxxx>

commit 863a8eb3f27098b42772f668e3977ff4cae10b04 upstream.

The folio conversion changed the behaviour of shmem_sg_alloc_table() to
put the entire length of the last folio into the sg list, even if the
sg list should have been shorter.  gen8_ggtt_insert_entries() relied on
the list being the right length and would overrun the end of the page
tables.  Other functions may also have been affected.

Clamp the length of the last entry in the sg list to be the expected
length.

Signed-off-by: Matthew Wilcox (Oracle) <willy@xxxxxxxxxxxxx>
Fixes: 0b62af28f249 ("i915: convert shmem_sg_free_table() to use a folio_batch")
Cc: stable@xxxxxxxxxxxxxxx # 6.5.x
Link: https://gitlab.freedesktop.org/drm/intel/-/issues/9256
Link: https://lore.kernel.org/lkml/6287208.lOV4Wx5bFT@xxxxxxxxxxxxxx/
Reported-by: Oleksandr Natalenko <oleksandr@xxxxxxxxxxxxxx>
Tested-by: Oleksandr Natalenko <oleksandr@xxxxxxxxxxxxxx>
Reviewed-by: Andrzej Hajda <andrzej.hajda@xxxxxxxxx>
Signed-off-by: Andrzej Hajda <andrzej.hajda@xxxxxxxxx>
Link: https://patchwork.freedesktop.org/patch/msgid/20230919194855.347582-1-willy@xxxxxxxxxxxxx
(cherry picked from commit 26a8e32e6d77900819c0c730fbfb393692dbbeea)
Signed-off-by: Rodrigo Vivi <rodrigo.vivi@xxxxxxxxx>
Signed-off-by: Greg Kroah-Hartman <gregkh@xxxxxxxxxxxxxxxxxxx>
---
 drivers/gpu/drm/i915/gem/i915_gem_shmem.c |   11 +++++++----
 1 file changed, 7 insertions(+), 4 deletions(-)

diff --git a/drivers/gpu/drm/i915/gem/i915_gem_shmem.c b/drivers/gpu/drm/i915/gem/i915_gem_shmem.c
index 8f1633c3fb93..73a4a4eb29e0 100644
--- a/drivers/gpu/drm/i915/gem/i915_gem_shmem.c
+++ b/drivers/gpu/drm/i915/gem/i915_gem_shmem.c
@@ -100,6 +100,7 @@ int shmem_sg_alloc_table(struct drm_i915_private *i915, struct sg_table *st,
 	st->nents = 0;
 	for (i = 0; i < page_count; i++) {
 		struct folio *folio;
+		unsigned long nr_pages;
 		const unsigned int shrink[] = {
 			I915_SHRINK_BOUND | I915_SHRINK_UNBOUND,
 			0,
@@ -150,6 +151,8 @@ int shmem_sg_alloc_table(struct drm_i915_private *i915, struct sg_table *st,
 			}
 		} while (1);
 
+		nr_pages = min_t(unsigned long,
+				folio_nr_pages(folio), page_count - i);
 		if (!i ||
 		    sg->length >= max_segment ||
 		    folio_pfn(folio) != next_pfn) {
@@ -157,13 +160,13 @@ int shmem_sg_alloc_table(struct drm_i915_private *i915, struct sg_table *st,
 				sg = sg_next(sg);
 
 			st->nents++;
-			sg_set_folio(sg, folio, folio_size(folio), 0);
+			sg_set_folio(sg, folio, nr_pages * PAGE_SIZE, 0);
 		} else {
 			/* XXX: could overflow? */
-			sg->length += folio_size(folio);
+			sg->length += nr_pages * PAGE_SIZE;
 		}
-		next_pfn = folio_pfn(folio) + folio_nr_pages(folio);
-		i += folio_nr_pages(folio) - 1;
+		next_pfn = folio_pfn(folio) + nr_pages;
+		i += nr_pages - 1;
 
 		/* Check that the i965g/gm workaround works. */
 		GEM_BUG_ON(gfp & __GFP_DMA32 && next_pfn >= 0x00100000UL);
-- 
2.42.0



Patches currently in stable-queue which might be from willy@xxxxxxxxxxxxx are

queue-6.5/proc-nommu-fix-empty-proc-pid-maps.patch
queue-6.5/mm-mempolicy-keep-vma-walk-if-both-mpol_mf_strict-and-mpol_mf_move-are-specified.patch
queue-6.5/i915-limit-the-length-of-an-sg-list-to-the-requested-length.patch
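As an aside for anyone reviewing this backport, below is a minimal
userspace sketch of the clamping logic the patch introduces, in case
the failure mode is easier to see outside the driver.  This is not
kernel code: PAGE_SIZE is hardcoded, min_ul() stands in for the
kernel's min_t(), and the folio sizes are invented for the example;
only the min(folio size, page_count - i) arithmetic mirrors the patch.

#include <stdio.h>

#define PAGE_SIZE 4096UL

static unsigned long min_ul(unsigned long a, unsigned long b)
{
	return a < b ? a : b;
}

int main(void)
{
	/* A 10-page request served by folios of 4 pages each, so the
	 * last folio extends 2 pages past the end of the request. */
	unsigned long page_count = 10;
	unsigned long folio_pages[] = { 4, 4, 4 };
	unsigned long sg_bytes = 0;
	unsigned long i = 0, f = 0;

	while (i < page_count) {
		/* The fix: account no more pages than the request
		 * still needs, mirroring min_t(unsigned long, ...)
		 * in the patch above. */
		unsigned long nr_pages = min_ul(folio_pages[f++],
						page_count - i);

		sg_bytes += nr_pages * PAGE_SIZE;
		i += nr_pages;
	}

	printf("sg list covers %lu bytes for a %lu-byte request\n",
	       sg_bytes, page_count * PAGE_SIZE);
	return 0;
}

Run as-is this prints 40960 bytes, exactly the requested length.
Remove the min_ul() clamp (modelling the pre-patch behaviour of always
accounting the full folio size) and it reports 49152 bytes instead:
an over-long sg list of the kind that let gen8_ggtt_insert_entries()
walk past the end of its page tables.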