On Mon, Aug 15, 2016 at 04:42:18PM +0100, Chris Wilson wrote:
> Rendering operations to the dma-buf are tracked implicitly via the
> reservation_object (dmabuf->resv). This is used to allow poll() to
> wait upon outstanding rendering (or just query the current status of
> rendering). The dma-buf sync ioctl allows userspace to prepare the
> dma-buf for CPU access, which should include waiting upon rendering.
> (Some drivers may need to do more work to ensure that the dma-buf mmap
> is coherent as well as complete.)
>
> v2: Always wait upon the reservation object implicitly. We choose to do
> it after the native handler in case it can do so more efficiently.
>
> Testcase: igt/prime_vgem
> Testcase: igt/gem_concurrent_blit # *vgem*
> Signed-off-by: Chris Wilson <chris@xxxxxxxxxxxxxxxxxx>
> Cc: Sumit Semwal <sumit.semwal@xxxxxxxxxx>
> Cc: Daniel Vetter <daniel.vetter@xxxxxxxx>
> Cc: Eric Anholt <eric@xxxxxxxxxx>
> Cc: linux-media@xxxxxxxxxxxxxxx
> Cc: dri-devel@xxxxxxxxxxxxxxxxxxxxx
> Cc: linaro-mm-sig@xxxxxxxxxxxxxxxx
> Cc: linux-kernel@xxxxxxxxxxxxxxx
> ---
>  drivers/dma-buf/dma-buf.c | 23 +++++++++++++++++++++++
>  1 file changed, 23 insertions(+)
>
> diff --git a/drivers/dma-buf/dma-buf.c b/drivers/dma-buf/dma-buf.c
> index ddaee60ae52a..cf04d249a6a4 100644
> --- a/drivers/dma-buf/dma-buf.c
> +++ b/drivers/dma-buf/dma-buf.c
> @@ -586,6 +586,22 @@ void dma_buf_unmap_attachment(struct dma_buf_attachment *attach,
>  }
>  EXPORT_SYMBOL_GPL(dma_buf_unmap_attachment);
>
> +static int __dma_buf_begin_cpu_access(struct dma_buf *dmabuf,
> +				      enum dma_data_direction direction)
> +{
> +	bool write = (direction == DMA_BIDIRECTIONAL ||
> +		      direction == DMA_TO_DEVICE);
> +	struct reservation_object *resv = dmabuf->resv;
> +	long ret;
> +
> +	/* Wait on any implicit rendering fences */
> +	ret = reservation_object_wait_timeout_rcu(resv, write, true,
> +						  MAX_SCHEDULE_TIMEOUT);
> +	if (ret < 0)
> +		return ret;
> +
> +	return 0;
> +}
>
>  /**
>   * dma_buf_begin_cpu_access - Must be called
>   * before accessing a dma_buf from the
> @@ -608,6 +624,13 @@ int dma_buf_begin_cpu_access(struct dma_buf *dmabuf,
>  	if (dmabuf->ops->begin_cpu_access)
>  		ret = dmabuf->ops->begin_cpu_access(dmabuf, direction);
>
> +	/* Ensure that all fences are waited upon - but we first allow
> +	 * the native handler the chance to do so more efficiently if it
> +	 * chooses. A double invocation here will be a reasonably cheap no-op.
> +	 */
> +	if (ret == 0)
> +		ret = __dma_buf_begin_cpu_access(dmabuf, direction);

Not sure whether we should wait first and then flush, or the other way
round. But I don't think it'll matter for any current dma-buf exporter,
so meh.

Reviewed-by: Daniel Vetter <daniel.vetter@xxxxxxxx>

Sumits, can you pls pick this one up and put into drm-misc?
-Daniel

> +
>  	return ret;
>  }
>  EXPORT_SYMBOL_GPL(dma_buf_begin_cpu_access);
> --
> 2.8.1
>

--
Daniel Vetter
Software Engineer, Intel Corporation
http://blog.ffwll.ch
_______________________________________________
dri-devel mailing list
dri-devel@xxxxxxxxxxxxxxxxxxxxx
https://lists.freedesktop.org/mailman/listinfo/dri-devel