Although there is usually not such a limitation (and when there is, it is
often only because the driver forgot to raise the very small default), it
is still correct here to break scatterlist elements into chunks of
dma_get_max_seg_size(). This might cause some issues for users with
misbehaving drivers. If bisecting has landed you on this commit, make sure
your drivers both set dma_set_max_seg_size() and check for contiguousness
correctly.

Signed-off-by: Andrew Davis <afd@xxxxxx>
---
 drivers/dma-buf/heaps/cma_heap.c | 9 +++++----
 1 file changed, 5 insertions(+), 4 deletions(-)

diff --git a/drivers/dma-buf/heaps/cma_heap.c b/drivers/dma-buf/heaps/cma_heap.c
index 28fb04eccdd0..cacc84cb5ece 100644
--- a/drivers/dma-buf/heaps/cma_heap.c
+++ b/drivers/dma-buf/heaps/cma_heap.c
@@ -58,10 +58,11 @@ static int cma_heap_attach(struct dma_buf *dmabuf,
 	if (!a)
 		return -ENOMEM;
 
-	ret = sg_alloc_table_from_pages(&a->table, buffer->pages,
-					buffer->pagecount, 0,
-					buffer->pagecount << PAGE_SHIFT,
-					GFP_KERNEL);
+	size_t max_segment = dma_get_max_seg_size(attachment->dev);
+	ret = sg_alloc_table_from_pages_segment(&a->table, buffer->pages,
+						buffer->pagecount, 0,
+						buffer->pagecount << PAGE_SHIFT,
+						max_segment, GFP_KERNEL);
 	if (ret) {
 		kfree(a);
 		return ret;
-- 
2.36.1
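
For driver authors pointed here by the bisect note above, a minimal sketch
of advertising a segment-size limit from probe follows; the function name,
device, and the 1 MiB limit are illustrative only and are not part of this
patch. Once the limit is set, importers such as the CMA heap will see it
via dma_get_max_seg_size() and chunk the attachment's scatterlist to match,
as done in the hunk above.

/*
 * Illustrative only: a driver whose DMA engine cannot handle segments
 * larger than 1 MiB advertises that limit at probe time, so exporters
 * building an sg_table for this device keep every entry within it.
 */
#include <linux/dma-mapping.h>
#include <linux/platform_device.h>
#include <linux/sizes.h>

static int example_dma_probe(struct platform_device *pdev)
{
	/* Hypothetical limit; use the real hardware maximum here. */
	dma_set_max_seg_size(&pdev->dev, SZ_1M);

	/* ... rest of probe: get resources, request IRQs, etc. ... */
	return 0;
}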