Hi Andrzej,

(at first I r-b'ed this patch, but then I wanted to think a bit more
about this "simplification" (whether it really simplifies things).
Please read the review in patch 2 first.)

> diff --git a/drivers/gpu/drm/i915/gem/selftests/i915_gem_mman.c b/drivers/gpu/drm/i915/gem/selftests/i915_gem_mman.c
> index 1cae24349a96fd..80e7fdd5d16427 100644
> --- a/drivers/gpu/drm/i915/gem/selftests/i915_gem_mman.c
> +++ b/drivers/gpu/drm/i915/gem/selftests/i915_gem_mman.c
> @@ -565,10 +565,8 @@ static int make_obj_busy(struct drm_i915_gem_object *obj)
>                 goto err_unpin;
>         }
>
> -       err = i915_request_await_object(rq, vma->obj, true);
> -       if (err == 0)
> -               err = i915_vma_move_to_active(vma, rq,
> -                                             EXEC_OBJECT_WRITE);
> +       err = i915_vma_move_to_active(vma, rq,
> +                                     EXEC_OBJECT_WRITE);

nit: no need to break the line here anymore, it fits within 80 columns
now.

>
>         i915_request_add(rq);
>  err_unpin:

[...]

> @@ -860,9 +854,7 @@ static int read_whitelisted_registers(struct intel_context *ce,
>                 return PTR_ERR(rq);
>
>         i915_vma_lock(results);
> -       err = i915_request_await_object(rq, results->obj, true);
> -       if (err == 0)
> -               err = i915_vma_move_to_active(results, rq, EXEC_OBJECT_WRITE);
> +       err = i915_vma_move_to_active(results, rq, EXEC_OBJECT_WRITE);
>         i915_vma_unlock(results);
>         if (err)
>                 goto err_req;
> @@ -944,9 +936,7 @@ static int scrub_whitelisted_registers(struct intel_context *ce)
>         }
>
>         i915_vma_lock(batch);
> -       err = i915_request_await_object(rq, batch->obj, false);
> -       if (err == 0)
> -               err = i915_vma_move_to_active(batch, rq, 0);
> +       err = i915_vma_move_to_active(batch, rq, 0);
>         i915_vma_unlock(batch);

The final result would be:

        i915_vma_lock();
        i915_vma_move_to_active();
        i915_vma_unlock();

and it's a pattern... as I suggested in patch 2, how about having an
i915_vma_move_to_active_unlocked()? (a rough sketch follows right
after the quoted hunks below) and...

>         if (err)
>                 goto err_request;
> diff --git a/drivers/gpu/drm/i915/gvt/scheduler.c b/drivers/gpu/drm/i915/gvt/scheduler.c
> index d6fe94cd0fdb61..b49098f045005e 100644
> --- a/drivers/gpu/drm/i915/gvt/scheduler.c
> +++ b/drivers/gpu/drm/i915/gvt/scheduler.c
> @@ -570,9 +570,8 @@ static int prepare_shadow_batch_buffer(struct intel_vgpu_workload *workload)
>         if (gmadr_bytes == 8)
>                 bb->bb_start_cmd_va[2] = 0;
>
> -       ret = i915_vma_move_to_active(bb->vma,
> -                                     workload->req,
> -                                     0);
> +       ret = _i915_vma_move_to_active(bb->vma, workload->req,
> +                                      &workload->req->fence, 0);
>         if (ret)
>                 goto err;
>
> diff --git a/drivers/gpu/drm/i915/i915_perf.c b/drivers/gpu/drm/i915/i915_perf.c
> index 15816df916c781..19138e99d2fd03 100644
> --- a/drivers/gpu/drm/i915/i915_perf.c
> +++ b/drivers/gpu/drm/i915/i915_perf.c
> @@ -2015,9 +2015,7 @@ emit_oa_config(struct i915_perf_stream *stream,
>                 goto err_add_request;
>         }
>
> -       err = i915_request_await_object(rq, vma->obj, 0);
> -       if (!err)
> -               err = i915_vma_move_to_active(vma, rq, 0);
> +       err = i915_vma_move_to_active(vma, rq, 0);
>         if (err)
>                 goto err_add_request;
>
> diff --git a/drivers/gpu/drm/i915/i915_vma.h b/drivers/gpu/drm/i915/i915_vma.h
> index aecd9c64486b27..47ac5bd1ffcce6 100644
> --- a/drivers/gpu/drm/i915/i915_vma.h
> +++ b/drivers/gpu/drm/i915/i915_vma.h
> @@ -64,7 +64,11 @@ static inline int __must_check
>  i915_vma_move_to_active(struct i915_vma *vma, struct i915_request *rq,
>                          unsigned int flags)
>  {
> -       return _i915_vma_move_to_active(vma, rq, &rq->fence, flags);
> +       int err = i915_request_await_object(rq, vma->obj, flags & EXEC_OBJECT_WRITE);
> +
> +       if (!err)
> +               err = _i915_vma_move_to_active(vma, rq, &rq->fence, flags);
> +       return err;
>  }
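Here is, roughly, what I mean with the _unlocked helper. Completely
untested sketch, just to show the shape of it (the name is only my
proposal):

        static inline int __must_check
        i915_vma_move_to_active_unlocked(struct i915_vma *vma,
                                         struct i915_request *rq,
                                         unsigned int flags)
        {
                int err;

                /*
                 * Take the vma lock for the caller, so the
                 * lock/move/unlock pattern doesn't need to be
                 * open-coded at every call site.
                 */
                i915_vma_lock(vma);
                err = i915_vma_move_to_active(vma, rq, flags);
                i915_vma_unlock(vma);

                return err;
        }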
... this i915_vma_move_to_active() now does more than just moving to
active: it also waits on dma fences. Shall we call it
i915_vma_move_to_active_async() or similar? (I'm not good at giving
names.) The unlocked version above would then be
i915_vma_move_to_active_async_unlocked(). Too long? More complex?

We would end up with something like:

        i915_vma_move_to_active()               /* not used */
        i915_vma_move_to_active_unlocked()
        i915_vma_move_to_active_async()
        i915_vma_move_to_active_async_unlocked()

(a rough sketch of how these could stack is at the very end of this
mail)

Anyway, as it is, it looks good and I didn't spot any errors in the
conversion:

Reviewed-by: Andi Shyti <andi.shyti@xxxxxxxxxxxxxxx>

Andi

[...]
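P.S. to make the naming idea concrete, here is a rough, untested
sketch of how the two async variants could stack on top of the
helpers this patch already touches (the names are only my proposal,
the bodies just reuse what's in the quoted hunks):

        /*
         * Also waits on the object's dma fences before moving to
         * active, i.e. what the new inline wrapper in this patch
         * does under the i915_vma_move_to_active() name.
         */
        static inline int __must_check
        i915_vma_move_to_active_async(struct i915_vma *vma,
                                      struct i915_request *rq,
                                      unsigned int flags)
        {
                int err = i915_request_await_object(rq, vma->obj,
                                                    flags & EXEC_OBJECT_WRITE);

                if (!err)
                        err = _i915_vma_move_to_active(vma, rq,
                                                       &rq->fence, flags);
                return err;
        }

        /* same, but also takes the vma lock for the caller */
        static inline int __must_check
        i915_vma_move_to_active_async_unlocked(struct i915_vma *vma,
                                               struct i915_request *rq,
                                               unsigned int flags)
        {
                int err;

                i915_vma_lock(vma);
                err = i915_vma_move_to_active_async(vma, rq, flags);
                i915_vma_unlock(vma);

                return err;
        }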