Hi Mikhail,

On Thu, Jan 16, 2025 at 2:25 AM Mikhail Rudenko <mike.rudenko@xxxxxxxxx> wrote:
>
> When support for V4L2_FLAG_MEMORY_NON_CONSISTENT was removed in
> commit 129134e5415d ("media: media/v4l2: remove
> V4L2_FLAG_MEMORY_NON_CONSISTENT flag"),
> vb2_dc_dmabuf_ops_{begin,end}_cpu_access() functions were made
> no-ops. Later, when support for V4L2_MEMORY_FLAG_NON_COHERENT was
> introduced in commit c0acf9cfeee0 ("media: videobuf2: handle
> V4L2_MEMORY_FLAG_NON_COHERENT flag"), the above functions remained
> no-ops, making cache maintenance for non-coherent dmabufs allocated by
> dma-contig impossible.
>
> Fix this by reintroducing dma_sync_sg_for_{cpu,device} calls to
> vb2_dc_dmabuf_ops_{begin,end}_cpu_access() functions for non-coherent
> buffers.
>
> Fixes: c0acf9cfeee0 ("media: videobuf2: handle V4L2_MEMORY_FLAG_NON_COHERENT flag")
> Cc: stable@xxxxxxxxxxxxxxx
> Signed-off-by: Mikhail Rudenko <mike.rudenko@xxxxxxxxx>
> ---
>  drivers/media/common/videobuf2/videobuf2-dma-contig.c | 14 ++++++++++++++
>  1 file changed, 14 insertions(+)
>

Thanks a lot for the patch! Sorry for the delay - I ended up being sick
with a nasty cold that took quite a while to recover from. Please take a
look at my comments inline.

> diff --git a/drivers/media/common/videobuf2/videobuf2-dma-contig.c b/drivers/media/common/videobuf2/videobuf2-dma-contig.c
> index bb0b7fa67b539aa73ad5ccf3c3bc318e26f8a4cb..889d6c11e15ab5cd4b4c317e865f1fef92df4b66 100644
> --- a/drivers/media/common/videobuf2/videobuf2-dma-contig.c
> +++ b/drivers/media/common/videobuf2/videobuf2-dma-contig.c
> @@ -427,6 +427,13 @@ static int
>  vb2_dc_dmabuf_ops_begin_cpu_access(struct dma_buf *dbuf,
>  				   enum dma_data_direction direction)
>  {
> +	struct vb2_dc_buf *buf = dbuf->priv;
> +	struct sg_table *sgt = buf->dma_sgt;
> +
> +	if (!buf->non_coherent_mem || buf->vb->skip_cache_sync_on_finish)

skip_cache_sync_on_finish shouldn't apply to this function, because the
buffer was shared with an external DMA-buf importer and we don't know in
what state it is at this point.

> +		return 0;
> +

We should also take care of the kernel mapping if it exists, because on
some platforms it may not be coherent with the userspace one - using
flush_kernel_vmap_range(). Please check how vb2_dc_prepare() /
vb2_dc_finish() do it.

> +	dma_sync_sg_for_cpu(buf->dev, sgt->sgl, sgt->nents, buf->dma_dir);

We can use the dma_sync_sgtable_*() variant here, so we can just pass
the entire sgt to it.

>  	return 0;
>  }
>
> @@ -434,6 +441,13 @@ static int
>  vb2_dc_dmabuf_ops_end_cpu_access(struct dma_buf *dbuf,
>  				 enum dma_data_direction direction)
>  {
> +	struct vb2_dc_buf *buf = dbuf->priv;
> +	struct sg_table *sgt = buf->dma_sgt;
> +
> +	if (!buf->non_coherent_mem || buf->vb->skip_cache_sync_on_prepare)
> +		return 0;
> +
> +	dma_sync_sg_for_device(buf->dev, sgt->sgl, sgt->nents, buf->dma_dir);
>  	return 0;

Overall the same comments here as for
vb2_dc_dmabuf_ops_begin_cpu_access(), +/- flush would change to
invalidate.

A rough sketch of what I mean for the begin_cpu_access() side is
appended below, after my signature.

Best regards,
Tomasz
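
For reference, here is a rough, untested sketch of how
vb2_dc_dmabuf_ops_begin_cpu_access() could look with the comments above
applied. It mirrors the way vb2_dc_finish() handles the device-to-CPU
direction (hence invalidate_kernel_vmap_range() here); the exact
flush/invalidate choice and the buf->vaddr / buf->size fields are taken
from my reading of the current dma-contig code and should be
double-checked rather than copied verbatim:

static int vb2_dc_dmabuf_ops_begin_cpu_access(struct dma_buf *dbuf,
					      enum dma_data_direction direction)
{
	struct vb2_dc_buf *buf = dbuf->priv;
	struct sg_table *sgt = buf->dma_sgt;

	/* Coherent buffers need no cache maintenance. */
	if (!buf->non_coherent_mem)
		return 0;

	/*
	 * No skip_cache_sync_on_finish check here: the buffer has been
	 * shared with an external importer, so we cannot assume anything
	 * about its state.
	 */

	/*
	 * Keep the kernel mapping (if any) in sync too; on some platforms
	 * it may not be coherent with the userspace mapping.
	 */
	if (buf->vaddr)
		invalidate_kernel_vmap_range(buf->vaddr, buf->size);

	/* Sync the whole sg_table in one go. */
	dma_sync_sgtable_for_cpu(buf->dev, sgt, buf->dma_dir);

	return 0;
}

vb2_dc_dmabuf_ops_end_cpu_access() would then be the mirror image, using
dma_sync_sgtable_for_device() and the corresponding kernel-vmap
maintenance.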