On 1/28/20 5:38 AM, Tomasz Figa wrote:
> On Fri, Jan 10, 2020 at 7:13 PM Hans Verkuil <hverkuil@xxxxxxxxx> wrote:
>>
>> On 12/17/19 4:20 AM, Sergey Senozhatsky wrote:
>>> Provide begin_cpu_access() and end_cpu_access() dma_buf_ops
>>> callbacks for cache synchronisation on exported buffers.
>>>
>>> Signed-off-by: Sergey Senozhatsky <senozhatsky@xxxxxxxxxxxx>
>>> ---
>>>  .../media/common/videobuf2/videobuf2-dma-sg.c | 22 +++++++++++++++++++
>>>  1 file changed, 22 insertions(+)
>>>
>>> diff --git a/drivers/media/common/videobuf2/videobuf2-dma-sg.c b/drivers/media/common/videobuf2/videobuf2-dma-sg.c
>>> index 6db60e9d5183..bfc99a0cb7b9 100644
>>> --- a/drivers/media/common/videobuf2/videobuf2-dma-sg.c
>>> +++ b/drivers/media/common/videobuf2/videobuf2-dma-sg.c
>>> @@ -470,6 +470,26 @@ static void vb2_dma_sg_dmabuf_ops_release(struct dma_buf *dbuf)
>>>  	vb2_dma_sg_put(dbuf->priv);
>>>  }
>>>
>>
>> There is no corresponding vb2_sg_buffer_consistent function here.
>>
>> Looking more closely I see that vb2_dma_sg_alloc doesn't pass the dma_attrs
>> argument to dma_map_sg_attrs, thus V4L2_FLAG_MEMORY_NON_CONSISTENT has no
>> effect on dma-sg buffers.
>
> videobuf2-dma-sg allocates the memory using the page allocator
> directly, which means that there is no memory consistency guarantee.
>
>>
>> Is there a reason why dma_attrs isn't passed on to dma_map_sg_attrs()?
>>
>
> V4L2_FLAG_MEMORY_NON_CONSISTENT is a flag for dma_alloc_attrs(). It
> isn't supposed to do anything for dma_map_sg_attrs(), which is only
> supposed to create the device (e.g. IOMMU) mapping for already
> allocated memory.

Ah, right. But could vb2_dma_sg_alloc_compacted() be modified so that it
uses dma_alloc_attrs() instead of alloc_pages()?

Sorry, that might be a stupid question; I'm not an expert in this area.
All I know is that I hate inconsistent APIs where something works for one
thing, but not another.

Regards,

	Hans

>
>> I suspect it was just laziness in the past, and that it should be wired
>> up, just as for dma-contig.
>>
>> Regards,
>>
>> 	Hans
>>
>>> +static int vb2_dma_sg_dmabuf_ops_begin_cpu_access(struct dma_buf *dbuf,
>>> +						  enum dma_data_direction direction)
>>> +{
>>> +	struct vb2_dma_sg_buf *buf = dbuf->priv;
>>> +	struct sg_table *sgt = buf->dma_sgt;
>>> +
>>> +	dma_sync_sg_for_cpu(buf->dev, sgt->sgl, sgt->nents, buf->dma_dir);
>>> +	return 0;
>>> +}
>>> +
>>> +static int vb2_dma_sg_dmabuf_ops_end_cpu_access(struct dma_buf *dbuf,
>>> +						enum dma_data_direction direction)
>>> +{
>>> +	struct vb2_dma_sg_buf *buf = dbuf->priv;
>>> +	struct sg_table *sgt = buf->dma_sgt;
>>> +
>>> +	dma_sync_sg_for_device(buf->dev, sgt->sgl, sgt->nents, buf->dma_dir);
>>> +	return 0;
>>> +}
>>> +
>>>  static void *vb2_dma_sg_dmabuf_ops_vmap(struct dma_buf *dbuf)
>>>  {
>>>  	struct vb2_dma_sg_buf *buf = dbuf->priv;
>>> @@ -488,6 +508,8 @@ static const struct dma_buf_ops vb2_dma_sg_dmabuf_ops = {
>>>  	.detach = vb2_dma_sg_dmabuf_ops_detach,
>>>  	.map_dma_buf = vb2_dma_sg_dmabuf_ops_map,
>>>  	.unmap_dma_buf = vb2_dma_sg_dmabuf_ops_unmap,
>>> +	.begin_cpu_access = vb2_dma_sg_dmabuf_ops_begin_cpu_access,
>>> +	.end_cpu_access = vb2_dma_sg_dmabuf_ops_end_cpu_access,
>>>  	.vmap = vb2_dma_sg_dmabuf_ops_vmap,
>>>  	.mmap = vb2_dma_sg_dmabuf_ops_mmap,
>>>  	.release = vb2_dma_sg_dmabuf_ops_release,
>>>
>>
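
For reference, the begin_cpu_access()/end_cpu_access() callbacks that the
patch adds are what userspace reaches through DMA_BUF_IOCTL_SYNC on the
exported buffer fd. A minimal sketch of a CPU-side consumer, assuming a
hypothetical dmabuf_fd and buffer size:

#include <stddef.h>
#include <stdint.h>
#include <sys/ioctl.h>
#include <sys/mman.h>
#include <linux/dma-buf.h>

/* Hypothetical consumer of an exported vb2 dma-sg buffer. */
static int cpu_read_buffer(int dmabuf_fd, size_t size)
{
	struct dma_buf_sync sync = { 0 };
	uint8_t *p;

	p = mmap(NULL, size, PROT_READ, MAP_SHARED, dmabuf_fd, 0);
	if (p == MAP_FAILED)
		return -1;

	/* Ends up in .begin_cpu_access: sync caches before the CPU reads. */
	sync.flags = DMA_BUF_SYNC_START | DMA_BUF_SYNC_READ;
	ioctl(dmabuf_fd, DMA_BUF_IOCTL_SYNC, &sync);

	/* ... consume p[0] .. p[size - 1] here ... */

	/* Ends up in .end_cpu_access: hand the buffer back to the device. */
	sync.flags = DMA_BUF_SYNC_END | DMA_BUF_SYNC_READ;
	ioctl(dmabuf_fd, DMA_BUF_IOCTL_SYNC, &sync);

	munmap(p, size);
	return 0;
}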
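
Separately, to make the dma_alloc_attrs()/dma_map_sg_attrs() split Tomasz
describes concrete, here is a minimal, hypothetical sketch. It is not
taken from videobuf2: the device pointer, sg_table and error values are
placeholders, and DMA_ATTR_NON_CONSISTENT is the allocation attribute as
it exists in kernels of this era. Consistency is decided at allocation
time; the mapping call only wires up the device side:

#include <linux/dma-mapping.h>

/* Hypothetical illustration contrasting the two calls; not vb2 code. */
static int example_alloc_vs_map(struct device *dev, struct sg_table *sgt)
{
	dma_addr_t dma_handle;
	void *cpu_addr;

	/*
	 * dma_alloc_attrs() both allocates the memory and fixes its
	 * consistency properties, so allocation-time attributes such as
	 * DMA_ATTR_NON_CONSISTENT are meaningful here. alloc_pages()
	 * offers no equivalent knob.
	 */
	cpu_addr = dma_alloc_attrs(dev, PAGE_SIZE, &dma_handle,
				   GFP_KERNEL, DMA_ATTR_NON_CONSISTENT);
	if (!cpu_addr)
		return -ENOMEM;

	/*
	 * dma_map_sg_attrs() only creates the device (e.g. IOMMU) mapping
	 * for pages that already exist; it cannot retroactively make a
	 * page-allocator buffer consistent, which is why passing
	 * V4L2_FLAG_MEMORY_NON_CONSISTENT through it changes nothing.
	 */
	if (!dma_map_sg_attrs(dev, sgt->sgl, sgt->orig_nents,
			      DMA_BIDIRECTIONAL, 0)) {
		dma_free_attrs(dev, PAGE_SIZE, cpu_addr, dma_handle,
			       DMA_ATTR_NON_CONSISTENT);
		return -EIO;
	}

	return 0;
}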