Re: [PATCH] drm/i915: Remove redundant i915_request_await_object in blit clears

On 15/06/2020 15:30, Chris Wilson wrote:
Quoting Tvrtko Ursulin (2020-06-15 15:09:28)
From: Tvrtko Ursulin <tvrtko.ursulin@xxxxxxxxx>

One i915_request_await_object call is enough, and we keep the one made
under the object lock since that is the final one.

At the same time move async clflushing setup under the same locked
section and consolidate common code into a helper function.

Signed-off-by: Tvrtko Ursulin <tvrtko.ursulin@xxxxxxxxx>
Cc: Matthew Auld <matthew.auld@xxxxxxxxx>
Cc: Chris Wilson <chris@xxxxxxxxxxxxxxxxxx>
Cc: Michael J. Ruhl <michael.j.ruhl@xxxxxxxxx>
---
  .../gpu/drm/i915/gem/i915_gem_object_blt.c    | 35 +++++++------------
  1 file changed, 13 insertions(+), 22 deletions(-)

diff --git a/drivers/gpu/drm/i915/gem/i915_gem_object_blt.c b/drivers/gpu/drm/i915/gem/i915_gem_object_blt.c
index f457d7130491..7d8b396e265a 100644
--- a/drivers/gpu/drm/i915/gem/i915_gem_object_blt.c
+++ b/drivers/gpu/drm/i915/gem/i915_gem_object_blt.c
@@ -126,6 +126,17 @@ void intel_emit_vma_release(struct intel_context *ce, struct i915_vma *vma)
         intel_engine_pm_put(ce->engine);
  }
+static int
+move_obj_to_gpu(struct drm_i915_gem_object *obj,
+               struct i915_request *rq,
+               bool write)
+{
+       if (obj->cache_dirty & ~obj->cache_coherent)
+               i915_gem_clflush_object(obj, 0);
+
+       return i915_request_await_object(rq, obj, write);
+}
+
  int i915_gem_object_fill_blt(struct drm_i915_gem_object *obj,
                              struct intel_context *ce,
                              u32 value)
@@ -143,12 +154,6 @@ int i915_gem_object_fill_blt(struct drm_i915_gem_object *obj,
         if (unlikely(err))
                 return err;
- if (obj->cache_dirty & ~obj->cache_coherent) {
-               i915_gem_object_lock(obj);
-               i915_gem_clflush_object(obj, 0);
-               i915_gem_object_unlock(obj);
-       }
-
         batch = intel_emit_vma_fill_blt(ce, vma, value);
         if (IS_ERR(batch)) {
                 err = PTR_ERR(batch);
@@ -165,10 +170,6 @@ int i915_gem_object_fill_blt(struct drm_i915_gem_object *obj,
         if (unlikely(err))
                 goto out_request;
- err = i915_request_await_object(rq, obj, true);
-       if (unlikely(err))
-               goto out_request;
-
         if (ce->engine->emit_init_breadcrumb) {
                 err = ce->engine->emit_init_breadcrumb(rq);
                 if (unlikely(err))
@@ -176,7 +177,7 @@ int i915_gem_object_fill_blt(struct drm_i915_gem_object *obj,
         }
          i915_vma_lock(vma);
-       err = i915_request_await_object(rq, vma->obj, true);
+       err = move_obj_to_gpu(vma->obj, rq, true);
         if (err == 0)
                 err = i915_vma_move_to_active(vma, rq, EXEC_OBJECT_WRITE);
         i915_vma_unlock(vma);

Ah, but here it's also the wrong side of init_breadcrumb.

Why is it important to mark the object as active on the failure path? We skip the payload, no?
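
For reference, a sketch of the ordering the comment above is pointing at: the await (and any clflush) set up by move_obj_to_gpu needs to be queued before emit_init_breadcrumb, so the whole locked block would move up rather than sit after the breadcrumb. This is only an illustration of the suggested reordering using the helpers from the patch above, not the final form of the code; error unwinding is elided.

```c
	/* Sketch: await + clflush setup before the init breadcrumb. */
	i915_vma_lock(vma);
	err = move_obj_to_gpu(vma->obj, rq, true);
	if (err == 0)
		err = i915_vma_move_to_active(vma, rq, EXEC_OBJECT_WRITE);
	i915_vma_unlock(vma);
	if (unlikely(err))
		goto out_request;

	/* Only now emit the breadcrumb and the blit payload. */
	if (ce->engine->emit_init_breadcrumb)
		err = ce->engine->emit_init_breadcrumb(rq);
```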

Regards,

Tvrtko

_______________________________________________
Intel-gfx mailing list
Intel-gfx@xxxxxxxxxxxxxxxxxxxxx
https://lists.freedesktop.org/mailman/listinfo/intel-gfx
