Quoting Tvrtko Ursulin (2017-07-27 10:05:02)
> From: Tvrtko Ursulin <tvrtko.ursulin@xxxxxxxxx>
>
> Since the scatterlist length field is an unsigned int, make
> sure that sg_alloc_table_from_pages does not overflow it while
> coalescing pages to a single entry.
>
> v2: Drop reference to future use. Use UINT_MAX.
> v3: max_segment must be page aligned.
> v4: Do not rely on compiler to optimise out the rounddown.
>     (Joonas Lahtinen)
> v5: Simplified loops and use post-increments rather than
>     pre-increments. Use PAGE_MASK and fix comment typo.
>     (Andy Shevchenko)
>
> Signed-off-by: Tvrtko Ursulin <tvrtko.ursulin@xxxxxxxxx>
> Cc: Masahiro Yamada <yamada.masahiro@xxxxxxxxxxxxx>
> Cc: linux-kernel@xxxxxxxxxxxxxxx
> Reviewed-by: Chris Wilson <chris@xxxxxxxxxxxxxxxxxx> (v2)
> Cc: Joonas Lahtinen <joonas.lahtinen@xxxxxxxxxxxxxxx>
> Cc: Andy Shevchenko <andy.shevchenko@xxxxxxxxx>
> ---
>  include/linux/scatterlist.h |  6 ++++++
>  lib/scatterlist.c           | 31 ++++++++++++++++++++-----------
>  2 files changed, 26 insertions(+), 11 deletions(-)
>
> diff --git a/include/linux/scatterlist.h b/include/linux/scatterlist.h
> index 205aefb4ed93..6dd2ddbc6230 100644
> --- a/include/linux/scatterlist.h
> +++ b/include/linux/scatterlist.h
> @@ -21,6 +21,12 @@ struct scatterlist {
>  };
>
>  /*
> + * Since the above length field is an unsigned int, below we define the maximum
> + * length in bytes that can be stored in one scatterlist entry.
> + */
> +#define SCATTERLIST_MAX_SEGMENT (UINT_MAX & PAGE_MASK)
> +
> +/*
>   * These macros should be used after a dma_map_sg call has been done
>   * to get bus addresses of each of the SG entries and their lengths.
>   * You should only work with the number of sg entries dma_map_sg
> diff --git a/lib/scatterlist.c b/lib/scatterlist.c
> index dee0c5004e2f..7b2e74da2c44 100644
> --- a/lib/scatterlist.c
> +++ b/lib/scatterlist.c
> @@ -394,17 +394,22 @@ int sg_alloc_table_from_pages(struct sg_table *sgt,
>  			      unsigned int offset, unsigned long size,
>  			      gfp_t gfp_mask)
>  {
> -	unsigned int chunks;
> -	unsigned int i;
> -	unsigned int cur_page;
> +	const unsigned int max_segment = SCATTERLIST_MAX_SEGMENT;
> +	unsigned int chunks, cur_page, seg_len, i;
>  	int ret;
>  	struct scatterlist *s;
>
>  	/* compute number of contiguous chunks */
>  	chunks = 1;
> -	for (i = 1; i < n_pages; ++i)
> -		if (page_to_pfn(pages[i]) != page_to_pfn(pages[i - 1]) + 1)
> -			++chunks;
> +	seg_len = 0;
> +	for (i = 1; i < n_pages; i++) {
> +		seg_len += PAGE_SIZE;
> +		if (seg_len >= max_segment ||
> +		    page_to_pfn(pages[i]) != page_to_pfn(pages[i - 1]) + 1) {
> +			chunks++;
> +			seg_len = 0;
> +		}
> +	}

Ok. Took a moment to realise that it works correctly for a chunk on the
last page.

>  	ret = sg_alloc_table(sgt, chunks, gfp_mask);
>  	if (unlikely(ret))
> @@ -413,17 +418,21 @@ int sg_alloc_table_from_pages(struct sg_table *sgt,
>  	/* merging chunks and putting them into the scatterlist */
>  	cur_page = 0;
>  	for_each_sg(sgt->sgl, s, sgt->orig_nents, i) {
> -		unsigned long chunk_size;
> -		unsigned int j;
> +		unsigned int j, chunk_size;
>
>  		/* look for the end of the current chunk */
> -		for (j = cur_page + 1; j < n_pages; ++j)
> -			if (page_to_pfn(pages[j]) !=
> +		seg_len = 0;
> +		for (j = cur_page + 1; j < n_pages; j++) {
> +			seg_len += PAGE_SIZE;
> +			if (seg_len >= max_segment ||
> +			    page_to_pfn(pages[j]) !=
>  			    page_to_pfn(pages[j - 1]) + 1)
>  				break;
> +		}

Ok.
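To make the earlier point about the trailing chunk concrete: chunks starts at
1 and the loop only increments on a boundary (segment cap reached or a
non-contiguous pfn), so whatever run ends on the last page is already counted.
Below is a standalone userspace sketch of the same counting logic; the pfn
values, the count_chunks name and the artificially small EX_* constants are
made up purely for illustration, they are not part of the patch.

#include <stdio.h>

#define EX_PAGE_SIZE	4096u			/* stand-in for PAGE_SIZE */
#define EX_MAX_SEGMENT	(16 * EX_PAGE_SIZE)	/* small cap, demo only */

/*
 * Same scheme as the patch: start from one chunk and only increment on a
 * boundary, so the final chunk needs no extra handling after the loop.
 */
static unsigned int count_chunks(const unsigned long *pfns,
				 unsigned int n_pages)
{
	unsigned int chunks = 1, seg_len = 0, i;

	for (i = 1; i < n_pages; i++) {
		seg_len += EX_PAGE_SIZE;
		if (seg_len >= EX_MAX_SEGMENT || pfns[i] != pfns[i - 1] + 1) {
			chunks++;
			seg_len = 0;
		}
	}

	return chunks;
}

int main(void)
{
	/* Two contiguous runs: {100, 101, 102} and {200, 201}. */
	const unsigned long pfns[] = { 100, 101, 102, 200, 201 };

	/*
	 * Prints chunks = 2: the 102 -> 200 break adds one, and the
	 * trailing run {200, 201} is covered by the initial count of 1.
	 */
	printf("chunks = %u\n", count_chunks(pfns, 5));

	return 0;
}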
>
>  		chunk_size = ((j - cur_page) << PAGE_SHIFT) - offset;
> -		sg_set_page(s, pages[cur_page], min(size, chunk_size), offset);
> +		sg_set_page(s, pages[cur_page],
> +			    min_t(unsigned long, size, chunk_size), offset);
>  		size -= chunk_size;
>  		offset = 0;
>  		cur_page = j;

Reviewed-by: Chris Wilson <chris@xxxxxxxxxxxxxxxxxx>
-Chris
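As an aside, for anyone reaching for this helper: a minimal sketch of a
caller, assuming a hypothetical driver that already holds an array of page
pointers (the example_map_pages name, the full-pages sizing and the trimmed
error handling are illustrative only, not from the patch). With the cap
introduced above, no single entry exceeds SCATTERLIST_MAX_SEGMENT bytes even
if every page is physically contiguous.

#include <linux/mm.h>
#include <linux/scatterlist.h>

/* Hypothetical caller: build an sg_table covering n_pages full pages. */
static int example_map_pages(struct page **pages, unsigned int n_pages,
			     struct sg_table *sgt)
{
	int ret;

	ret = sg_alloc_table_from_pages(sgt, pages, n_pages, 0,
					(unsigned long)n_pages << PAGE_SHIFT,
					GFP_KERNEL);
	if (ret)
		return ret;

	/* ... dma_map_sg(), use the entries, dma_unmap_sg() ... */

	sg_free_table(sgt);
	return 0;
}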