6.5-stable review patch.  If anyone has any objections, please let me know.

------------------

From: Matthew Wilcox (Oracle) <willy@xxxxxxxxxxxxx>

commit b7fd68ab1538e3adb665670414bea440f399fda9 upstream.

If the shared memory object is larger than the DRM object that it backs,
we can overrun the page array.  Limit the number of pages we install
from each folio to prevent this.

Signed-off-by: "Matthew Wilcox (Oracle)" <willy@xxxxxxxxxxxxx>
Reported-by: Oleksandr Natalenko <oleksandr@xxxxxxxxxxxxxx>
Tested-by: Oleksandr Natalenko <oleksandr@xxxxxxxxxxxxxx>
Link: https://lore.kernel.org/lkml/13360591.uLZWGnKmhe@xxxxxxxxxxxxxx/
Fixes: 3291e09a4638 ("drm: convert drm_gem_put_pages() to use a folio_batch")
Cc: stable@xxxxxxxxxxxxxxx # 6.5.x
Signed-off-by: Maxime Ripard <mripard@xxxxxxxxxx>
Link: https://patchwork.freedesktop.org/patch/msgid/20231005135648.2317298-1-willy@xxxxxxxxxxxxx
Signed-off-by: Greg Kroah-Hartman <gregkh@xxxxxxxxxxxxxxxxxxx>
---
 drivers/gpu/drm/drm_gem.c | 6 ++++--
 1 file changed, 4 insertions(+), 2 deletions(-)

--- a/drivers/gpu/drm/drm_gem.c
+++ b/drivers/gpu/drm/drm_gem.c
@@ -537,7 +537,7 @@ struct page **drm_gem_get_pages(struct d
 	struct page **pages;
 	struct folio *folio;
 	struct folio_batch fbatch;
-	int i, j, npages;
+	long i, j, npages;
 
 	if (WARN_ON(!obj->filp))
 		return ERR_PTR(-EINVAL);
@@ -561,11 +561,13 @@ struct page **drm_gem_get_pages(struct d
 
 	i = 0;
 	while (i < npages) {
+		long nr;
 		folio = shmem_read_folio_gfp(mapping, i,
 				mapping_gfp_mask(mapping));
 		if (IS_ERR(folio))
 			goto fail;
-		for (j = 0; j < folio_nr_pages(folio); j++, i++)
+		nr = min(npages - i, folio_nr_pages(folio));
+		for (j = 0; j < nr; j++, i++)
 			pages[i] = folio_file_page(folio, i);
 
 		/* Make sure shmem keeps __GFP_DMA32 allocated pages in the
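
To illustrate the overrun the commit message describes, here is a minimal
userspace sketch (plain C, not kernel code; the object and folio sizes are
made up for illustration) of how an unclamped per-folio loop would walk past
the end of the page array when the backing shmem object is larger than the
DRM object, and how clamping with min() stops it:

	/* Minimal sketch, not kernel code: a 5-page "DRM object" backed by
	 * folios that each cover 4 pages.  Without the clamp the second
	 * folio would write pages[5]..pages[7], past the end of the array;
	 * with it, only pages[4] is written and the loop stops at npages.
	 */
	#include <stdio.h>

	#define MIN(a, b)	((a) < (b) ? (a) : (b))

	int main(void)
	{
		long npages = 5;		/* pages the DRM object needs */
		long folio_nr_pages = 4;	/* made-up pages per backing folio */
		long pages[5];			/* stand-in for the page array */
		long i = 0, j, nr;

		while (i < npages) {
			/* Clamp the per-folio count to the pages still needed. */
			nr = MIN(npages - i, folio_nr_pages);
			for (j = 0; j < nr; j++, i++)
				pages[i] = i;	/* "install" page i */
		}

		printf("installed %ld of %ld pages, no overrun\n", i, npages);
		return 0;
	}

The same arithmetic is what the new nr = min(npages - i,
folio_nr_pages(folio)) line in drm_gem_get_pages() enforces.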