On Thursday, October 5, 2023, 15:56:47 CEST Matthew Wilcox (Oracle) wrote:
> If the shared memory object is larger than the DRM object that it backs,
> we can overrun the page array.  Limit the number of pages we install
> from each folio to prevent this.
>
> Signed-off-by: Matthew Wilcox (Oracle) <willy@xxxxxxxxxxxxx>
> Reported-by: Oleksandr Natalenko <oleksandr@xxxxxxxxxxxxxx>
> Tested-by: Oleksandr Natalenko <oleksandr@xxxxxxxxxxxxxx>
> Link: https://lore.kernel.org/lkml/13360591.uLZWGnKmhe@xxxxxxxxxxxxxx/
> Fixes: 3291e09a4638 ("drm: convert drm_gem_put_pages() to use a folio_batch")
> Cc: stable@xxxxxxxxxxxxxxx # 6.5.x
> ---
>  drivers/gpu/drm/drm_gem.c | 6 ++++--
>  1 file changed, 4 insertions(+), 2 deletions(-)
>
> diff --git a/drivers/gpu/drm/drm_gem.c b/drivers/gpu/drm/drm_gem.c
> index 6129b89bb366..44a948b80ee1 100644
> --- a/drivers/gpu/drm/drm_gem.c
> +++ b/drivers/gpu/drm/drm_gem.c
> @@ -540,7 +540,7 @@ struct page **drm_gem_get_pages(struct drm_gem_object *obj)
>  	struct page **pages;
>  	struct folio *folio;
>  	struct folio_batch fbatch;
> -	int i, j, npages;
> +	long i, j, npages;
>
>  	if (WARN_ON(!obj->filp))
>  		return ERR_PTR(-EINVAL);
> @@ -564,11 +564,13 @@ struct page **drm_gem_get_pages(struct drm_gem_object *obj)
>
>  	i = 0;
>  	while (i < npages) {
> +		long nr;
>  		folio = shmem_read_folio_gfp(mapping, i,
>  				mapping_gfp_mask(mapping));
>  		if (IS_ERR(folio))
>  			goto fail;
> -		for (j = 0; j < folio_nr_pages(folio); j++, i++)
> +		nr = min(npages - i, folio_nr_pages(folio));
> +		for (j = 0; j < nr; j++, i++)
>  			pages[i] = folio_file_page(folio, i);
>
>  		/* Make sure shmem keeps __GFP_DMA32 allocated pages in the

Gentle ping. It would be nice to have this picked up so that it gets into the stable kernel sooner rather than later.

Thanks.

--
Oleksandr Natalenko (post-factum)
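For illustration, the clamp the patch introduces can be sketched in plain userspace C. This is not kernel code: `FOLIO_PAGES` is a hypothetical fixed folio size standing in for `folio_nr_pages()`, and writing `1` into the array stands in for `folio_file_page()`. The point is only that `nr = min(npages - i, folio size)` keeps the loop from writing past the end of `pages[]` when the last folio extends beyond the object.

```c
#include <stddef.h>

/* Hypothetical folio size in pages; a stand-in for folio_nr_pages(). */
#define FOLIO_PAGES 4

/*
 * Fill pages[0..npages-1], taking pages folio by folio.  Without the
 * clamp below, a folio larger than the remaining object would make the
 * inner loop index past the end of pages[] -- the overrun the patch
 * fixes in drm_gem_get_pages().
 */
static long fill_pages(int *pages, long npages)
{
	long i = 0;

	while (i < npages) {
		/* Clamp to the pages actually left in the object. */
		long remaining = npages - i;
		long nr = remaining < FOLIO_PAGES ? remaining : FOLIO_PAGES;

		for (long j = 0; j < nr; j++, i++)
			pages[i] = 1;	/* stand-in for folio_file_page() */
	}
	return i;
}
```

With `npages = 6` and a folio size of 4, the unclamped loop would write 8 entries; the clamped version stops at exactly 6.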