There's only one exclusive slot, and we must not break the ordering.
Adding a new exclusive fence drops all previous fences from the
dma_resv. To avoid violating the signalling order we err on the side of
over-synchronizing by waiting for the existing fences, even if
userspace asked us to ignore them.

A better fix would be to use a dma_fence_chain or _array like e.g.
amdgpu now uses, but it probably makes sense to lift this into
dma-resv.c code as a proper concept, so that drivers don't have to hack
up their own solution each on their own. Hence go with the simple fix
for now.

Another option is the fence import ioctl from Jason:

https://lore.kernel.org/dri-devel/20210610210925.642582-7-jason@xxxxxxxxxxxxxx/

v2: Improve commit message per Lucas' suggestion.

Cc: Lucas Stach <l.stach@xxxxxxxxxxxxxx>
Signed-off-by: Daniel Vetter <daniel.vetter@xxxxxxxxx>
Cc: Maarten Lankhorst <maarten.lankhorst@xxxxxxxxxxxxxxx>
Cc: "Thomas Hellström" <thomas.hellstrom@xxxxxxxxxxxxxxx>
Cc: Jason Ekstrand <jason@xxxxxxxxxxxxxx>
---
 drivers/gpu/drm/i915/gem/i915_gem_execbuffer.c | 6 +++++-
 1 file changed, 5 insertions(+), 1 deletion(-)

diff --git a/drivers/gpu/drm/i915/gem/i915_gem_execbuffer.c b/drivers/gpu/drm/i915/gem/i915_gem_execbuffer.c
index 47e07179347a..9d717c8842e2 100644
--- a/drivers/gpu/drm/i915/gem/i915_gem_execbuffer.c
+++ b/drivers/gpu/drm/i915/gem/i915_gem_execbuffer.c
@@ -1775,6 +1775,7 @@ static int eb_move_to_gpu(struct i915_execbuffer *eb)
 		struct i915_vma *vma = ev->vma;
 		unsigned int flags = ev->flags;
 		struct drm_i915_gem_object *obj = vma->obj;
+		bool async, write;
 
 		assert_vma_held(vma);
 
@@ -1806,7 +1807,10 @@ static int eb_move_to_gpu(struct i915_execbuffer *eb)
 			flags &= ~EXEC_OBJECT_ASYNC;
 		}
 
-		if (err == 0 && !(flags & EXEC_OBJECT_ASYNC)) {
+		async = flags & EXEC_OBJECT_ASYNC;
+		write = flags & EXEC_OBJECT_WRITE;
+
+		if (err == 0 && (!async || write)) {
 			err = i915_request_await_object
 				(eb->request, obj, flags & EXEC_OBJECT_WRITE);
 		}
-- 
2.32.0
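
For illustration only, a small userspace model of the new await
condition. This is not kernel code; the bit values below are stand-ins
rather than the real i915 uapi definitions, and the helper name is
made up. It just shows the decision the hunk above makes: a write
installs an exclusive fence, so the existing fences must be awaited
even when userspace set EXEC_OBJECT_ASYNC.

/* stand-in userspace sketch, not the i915 implementation */
#include <stdbool.h>
#include <stdio.h>

/* illustrative bit values only, see the i915 uapi header for the real ones */
#define EXEC_OBJECT_WRITE (1 << 0)
#define EXEC_OBJECT_ASYNC (1 << 1)

static bool must_await_existing_fences(unsigned int flags)
{
	bool async = flags & EXEC_OBJECT_ASYNC;
	bool write = flags & EXEC_OBJECT_WRITE;

	/* over-synchronize on writes to preserve fence signalling order */
	return !async || write;
}

int main(void)
{
	printf("read,  async: await=%d\n",
	       must_await_existing_fences(EXEC_OBJECT_ASYNC));
	printf("write, async: await=%d\n",
	       must_await_existing_fences(EXEC_OBJECT_ASYNC | EXEC_OBJECT_WRITE));
	printf("read,  sync:  await=%d\n",
	       must_await_existing_fences(0));
	return 0;
}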