On Fri, Feb 28, 2025 at 12:30 PM Nicolas Dufresne <nicolas@xxxxxxxxxxxx> wrote:
>
> Le mardi 28 janvier 2025 à 23:35 +0300, Mikhail Rudenko a écrit :
> > When support for V4L2_FLAG_MEMORY_NON_CONSISTENT was removed in
> > commit 129134e5415d ("media: media/v4l2: remove
> > V4L2_FLAG_MEMORY_NON_CONSISTENT flag"),
> > vb2_dc_dmabuf_ops_{begin,end}_cpu_access() functions were made
> > no-ops. Later, when support for V4L2_MEMORY_FLAG_NON_COHERENT was
> > introduced in commit c0acf9cfeee0 ("media: videobuf2: handle
> > V4L2_MEMORY_FLAG_NON_COHERENT flag"), the above functions remained
> > no-ops, making cache maintenance for non-coherent dmabufs allocated
> > by dma-contig impossible.
> >
> > Fix this by reintroducing dma_sync_sgtable_for_{cpu,device} and
> > {flush,invalidate}_kernel_vmap_range calls to
> > vb2_dc_dmabuf_ops_{begin,end}_cpu_access() functions for non-coherent
> > buffers.
> >
> > Fixes: c0acf9cfeee0 ("media: videobuf2: handle V4L2_MEMORY_FLAG_NON_COHERENT flag")
> > Cc: stable@xxxxxxxxxxxxxxx
> > Signed-off-by: Mikhail Rudenko <mike.rudenko@xxxxxxxxx>
> > ---
> >  .../media/common/videobuf2/videobuf2-dma-contig.c | 22 ++++++++++++++++++++++
> >  1 file changed, 22 insertions(+)
> >
> > diff --git a/drivers/media/common/videobuf2/videobuf2-dma-contig.c b/drivers/media/common/videobuf2/videobuf2-dma-contig.c
> > index bb0b7fa67b539aa73ad5ccf3c3bc318e26f8a4cb..146d7997a0da5989fb081a6f28ce0641fe726e63 100644
> > --- a/drivers/media/common/videobuf2/videobuf2-dma-contig.c
> > +++ b/drivers/media/common/videobuf2/videobuf2-dma-contig.c
> > @@ -427,6 +427,17 @@ static int vb2_dc_dmabuf_ops_begin_cpu_access(struct dma_buf *dbuf,
> >  					      enum dma_data_direction direction)
> >  {
> > +	struct vb2_dc_buf *buf = dbuf->priv;
> > +	struct sg_table *sgt = buf->dma_sgt;
> > +
> > +	if (!buf->non_coherent_mem)
> > +		return 0;
> > +
> > +	if (buf->vaddr)
> > +		invalidate_kernel_vmap_range(buf->vaddr, buf->size);
>
> Am I correct that this is mostly to prevent the kernel from reading
> back old data from the cache after an application or other driver did
> CPU writes ? If so, can't we restrict that to DMA_TO_DEVICE and
> DMA_BIDIRECTIONAL ?

Note that this function must also synchronize between the user-space
and kernel mappings, where the DMA direction doesn't really matter.
Also, it's unlikely for it to be called when not needed - why would one
begin a CPU access before the DMA, when the DMA is FROM_DEVICE?

> As for pending kernel writes, they should have been flushed before the
> buffer is made available for dequeue.

There is no implicit flushing for imported DMA-bufs. All the flushing
needs to be executed directly by the CPU accessors by surrounding the
access with begin and end CPU access, be it in the kernel or userspace.

> And any access while a buffer is
> queued is concurrent access, which is expected to have undefined
> behaviour.

Correct.

> > +
> > +	dma_sync_sgtable_for_cpu(buf->dev, sgt, buf->dma_dir);
>
> Isn't there a link to make between buf->dma_dir and direction before
> calling this ? Also, shouldn't we use direction instead of buf->dma_dir
> to possibly limit the scope ?

Oh, yes, that's a good catch. It should be |direction| passed here and
not |buf->dma_dir|, since the former determines what CPU access will
be done. (A rough sketch with that change applied is at the end of
this mail.)

> > +
> >  	return 0;
> >  }
> >
> > @@ -434,6 +445,17 @@ static int vb2_dc_dmabuf_ops_end_cpu_access(struct dma_buf *dbuf,
> >  					    enum dma_data_direction direction)
> >  {
> > +	struct vb2_dc_buf *buf = dbuf->priv;
> > +	struct sg_table *sgt = buf->dma_sgt;
> > +
> > +	if (!buf->non_coherent_mem)
> > +		return 0;
> > +
> > +	if (buf->vaddr)
> > +		flush_kernel_vmap_range(buf->vaddr, buf->size);
> > +
> > +	dma_sync_sgtable_for_device(buf->dev, sgt, buf->dma_dir);
>
> Similar questions for the end_cpu_access implementation.

Yeah, same here.

> Nicolas
>
> > +
> >  	return 0;
> >  }
> >
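
By the way, this is not the actual fix, just a rough, untested sketch
of what the two callbacks could look like with |direction| passed to
the DMA sync helpers instead of |buf->dma_dir|, based only on the
hunks quoted above:

static int vb2_dc_dmabuf_ops_begin_cpu_access(struct dma_buf *dbuf,
					      enum dma_data_direction direction)
{
	struct vb2_dc_buf *buf = dbuf->priv;
	struct sg_table *sgt = buf->dma_sgt;

	/* Coherent buffers need no cache maintenance. */
	if (!buf->non_coherent_mem)
		return 0;

	/* Drop stale cachelines from the kernel vmap alias, if mapped. */
	if (buf->vaddr)
		invalidate_kernel_vmap_range(buf->vaddr, buf->size);

	/* Sync for the CPU access being started, not for buf->dma_dir. */
	dma_sync_sgtable_for_cpu(buf->dev, sgt, direction);

	return 0;
}

static int vb2_dc_dmabuf_ops_end_cpu_access(struct dma_buf *dbuf,
					    enum dma_data_direction direction)
{
	struct vb2_dc_buf *buf = dbuf->priv;
	struct sg_table *sgt = buf->dma_sgt;

	if (!buf->non_coherent_mem)
		return 0;

	/* Write back any dirty cachelines in the kernel vmap alias. */
	if (buf->vaddr)
		flush_kernel_vmap_range(buf->vaddr, buf->size);

	/* Again, the direction of the CPU access that just ended. */
	dma_sync_sgtable_for_device(buf->dev, sgt, direction);

	return 0;
}

Whether the kernel vmap maintenance should also be limited by
|direction| (invalidate only before CPU reads, flush only after CPU
writes) is a separate question; the sketch keeps it unconditional,
like the patch above.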