Re: [PATCH 1/2] drm/i915: Keep a count of requests waiting for a slot on GPU


On 22/11/2017 13:35, Chris Wilson wrote:
Quoting Tvrtko Ursulin (2017-11-22 13:31:56)

On 22/11/2017 12:59, Chris Wilson wrote:
Quoting Tvrtko Ursulin (2017-11-22 12:46:21)
From: Tvrtko Ursulin <tvrtko.ursulin@xxxxxxxxx>

Keep a per-engine number of runnable (waiting for GPU time) requests.

Signed-off-by: Tvrtko Ursulin <tvrtko.ursulin@xxxxxxxxx>
---
   drivers/gpu/drm/i915/i915_gem_request.c | 5 +++++
   drivers/gpu/drm/i915/intel_engine_cs.c  | 5 +++--
   drivers/gpu/drm/i915/intel_lrc.c        | 1 +
   drivers/gpu/drm/i915/intel_ringbuffer.h | 8 ++++++++
   4 files changed, 17 insertions(+), 2 deletions(-)

diff --git a/drivers/gpu/drm/i915/i915_gem_request.c b/drivers/gpu/drm/i915/i915_gem_request.c
index 7325469ce754..e3c74cafa7d4 100644
--- a/drivers/gpu/drm/i915/i915_gem_request.c
+++ b/drivers/gpu/drm/i915/i915_gem_request.c
@@ -480,6 +480,9 @@ void __i915_gem_request_submit(struct drm_i915_gem_request *request)
          engine->emit_breadcrumb(request,
                                  request->ring->vaddr + request->postfix);
+       GEM_BUG_ON(engine->queued == 0);
+       engine->queued--;

Ok, so under engine->timeline->lock.

+
          spin_lock(&request->timeline->lock);
          list_move_tail(&request->link, &timeline->requests);
          spin_unlock(&request->timeline->lock);
@@ -525,6 +528,8 @@ void __i915_gem_request_unsubmit(struct drm_i915_gem_request *request)
          timeline = request->timeline;
          GEM_BUG_ON(timeline == engine->timeline);
+       engine->queued++;
+
          spin_lock(&timeline->lock);
          list_move(&request->link, &timeline->requests);
          spin_unlock(&timeline->lock);
diff --git a/drivers/gpu/drm/i915/intel_engine_cs.c b/drivers/gpu/drm/i915/intel_engine_cs.c
index d53680c08cb0..cc9d60130ddd 100644
--- a/drivers/gpu/drm/i915/intel_engine_cs.c
+++ b/drivers/gpu/drm/i915/intel_engine_cs.c
@@ -1675,12 +1675,13 @@ void intel_engine_dump(struct intel_engine_cs *engine, struct drm_printer *m)
          u64 addr;
        drm_printf(m, "%s\n", engine->name);
-       drm_printf(m, "    current seqno %x, last %x, hangcheck %x [%d ms], inflight %d\n",
+       drm_printf(m, "    current seqno %x, last %x, hangcheck %x [%d ms], inflight %d, queued %d\n",
                     intel_engine_get_seqno(engine),
                     intel_engine_last_submit(engine),
                     engine->hangcheck.seqno,
                     jiffies_to_msecs(jiffies - engine->hangcheck.action_timestamp),
-                  engine->timeline->inflight_seqnos);
+                  engine->timeline->inflight_seqnos,
+                  INTEL_GEN(dev_priv) >= 8 ? engine->queued : -1);

Not gen8 specific, just add engine->queued++ to i9xx_submit_request().

But where to put the decrement, and more importantly, how do we keep it
from lagging reality via the retire worker? :(

The decrement is in __i915_gem_request_submit as before.
So basically it should remain 0, since we aren't keeping a queue of work
for the HW and just submitting into the ringbuffer as soon as we are
ready. (This may not always remain so...) Hence why the (last_seqno -
current_seqno) was so important.

I keep getting lost in these callbacks...

Right, so if we put the increment in i9xx_submit_request, it calls i915_gem_request_submit, which immediately decrements it, so as you say the counter then stays at zero.

So on <gen8 you just want to look at last_seqno - current_seqno, which covers the totality of submitted work, so that sounds OK. It also makes the metric usable on those platforms. Sorry for the confusion.

Regards,

Tvrtko
_______________________________________________
Intel-gfx mailing list
Intel-gfx@xxxxxxxxxxxxxxxxxxxxx
https://lists.freedesktop.org/mailman/listinfo/intel-gfx
