If the vm doesn't request async binding, like for example with the dpt,
then we should be able to skip the async path and avoid calling
i915_vm_lock_objects() altogether. Currently if we have a moving fence
set for the BO (even though it might have signalled), we still take the
async path regardless of the bind_async setting, and then later still
end up just doing i915_gem_object_wait_moving_fence() anyway.

Alternatively we would need to add a dummy scratch object which can be
locked, just for the dpt.

Suggested-by: Thomas Hellström <thomas.hellstrom@xxxxxxxxxxxxxxx>
Signed-off-by: Matthew Auld <matthew.auld@xxxxxxxxx>
Cc: Stanislav Lisovskiy <stanislav.lisovskiy@xxxxxxxxx>
---
 drivers/gpu/drm/i915/i915_vma.c | 4 ++--
 1 file changed, 2 insertions(+), 2 deletions(-)

diff --git a/drivers/gpu/drm/i915/i915_vma.c b/drivers/gpu/drm/i915/i915_vma.c
index 94fcdb7bd21d..4d4d3659c938 100644
--- a/drivers/gpu/drm/i915/i915_vma.c
+++ b/drivers/gpu/drm/i915/i915_vma.c
@@ -1360,8 +1360,7 @@ int i915_vma_pin_ww(struct i915_vma *vma, struct i915_gem_ww_ctx *ww,
 	if (flags & PIN_GLOBAL)
 		wakeref = intel_runtime_pm_get(&vma->vm->i915->runtime_pm);
 
-	moving = vma->obj ? i915_gem_object_get_moving_fence(vma->obj) : NULL;
-	if (flags & vma->vm->bind_async_flags || moving) {
+	if (flags & vma->vm->bind_async_flags) {
 		/* lock VM */
 		err = i915_vm_lock_objects(vma->vm, ww);
 		if (err)
@@ -1375,6 +1374,7 @@ int i915_vma_pin_ww(struct i915_vma *vma, struct i915_gem_ww_ctx *ww,
 
 		work->vm = i915_vm_get(vma->vm);
 
+		moving = vma->obj ? i915_gem_object_get_moving_fence(vma->obj) : NULL;
 		dma_fence_work_chain(&work->base, moving);
 
 		/* Allocate enough page directories to used PTE */
-- 
2.34.1
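
For reference, here is a rough sketch of the control flow the change aims for:
the moving fence is only consulted once we know we are on the async bind path,
while a synchronous bind (e.g. the dpt) just waits for any outstanding move
directly. The helper name sketch_pin_bind() and the exact call sites are
illustrative assumptions, not the upstream code; only i915_vm_lock_objects(),
i915_gem_object_get_moving_fence(), i915_gem_object_wait_moving_fence() and
dma_fence_work_chain() come from the patch and commit message above.

/*
 * Illustrative sketch only -- assumed control flow, not a verbatim copy of
 * i915_vma_pin_ww()/i915_vma_bind().
 */
static int sketch_pin_bind(struct i915_vma *vma, struct i915_gem_ww_ctx *ww,
			   u64 flags)
{
	struct dma_fence *moving;
	int err;

	if (flags & vma->vm->bind_async_flags) {
		/* Async bind: lock the VM objects and chain after any move. */
		err = i915_vm_lock_objects(vma->vm, ww);
		if (err)
			return err;

		moving = vma->obj ?
			 i915_gem_object_get_moving_fence(vma->obj) : NULL;
		/* ... set up the bind work and dma_fence_work_chain() it
		 * after the moving fence ... */
		dma_fence_put(moving);
		return 0;
	}

	/*
	 * Sync bind (e.g. the dpt): no i915_vm_lock_objects() needed; just
	 * wait for any outstanding move before writing the PTEs, which is a
	 * no-op if the moving fence has already signalled.
	 */
	if (vma->obj) {
		err = i915_gem_object_wait_moving_fence(vma->obj, true);
		if (err)
			return err;
	}

	/* ... perform the immediate bind ... */
	return 0;
}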