Re: [PATCH 05/10] drm/i915: Trim the retired request queue after submitting

On 16/01/2018 10:32, Chris Wilson wrote:
Quoting Tvrtko Ursulin (2018-01-16 10:18:55)

On 15/01/2018 21:24, Chris Wilson wrote:
If we submit a request and see that the previous request on this
timeline was already signaled, then first we do not need to add the
dependency tracker for that completed request, and second we know that
there is a large backlog in retiring requests affecting this
timeline. Given that we just submitted more work to the HW, now would be
a good time to catch up on those retirements.

How can we be sure there is a large backlog? It may just be that the
submission frequency combined with request duration is just right for us
to always see a solitary completed previous request, no?

We always try and retire one old request per new request. To get to the
point where we see an unretired completed fence here implies that we are
allocating faster than retiring, and so have a backlog.
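As a toy illustration of that equilibrium argument (plain userspace C,
nothing to do with the driver internals; the burst pattern and counters
are invented for the example), consider an app that submits in bursts
and then idles while the HW drains its queue:

#include <stdio.h>

#define BURST 8

int main(void)
{
	int inflight = 0;	/* submitted, not yet completed */
	int unretired = 0;	/* completed, but not yet retired */

	for (int round = 0; round < 3; round++) {
		for (int i = 0; i < BURST; i++) {
			/* the condition the patch tests: prev already
			 * signaled but still sitting unretired */
			if (i == 0 && unretired)
				printf("round %d: prev completed, backlog=%d\n",
				       round, unretired);

			inflight++;		/* allocate a new request */
			if (unretired)
				unretired--;	/* retire one old per new */
		}

		/* app idles; HW completes the whole burst */
		unretired += inflight;
		inflight = 0;
	}

	return 0;
}

At the first submission of every round after the first, prev is found
completed and the unretired backlog is the full burst: a completed prev
only ever shows up together with a backlog worth catching up on.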


Signed-off-by: Chris Wilson <chris@xxxxxxxxxxxxxxxxxx>
Cc: Joonas Lahtinen <joonas.lahtinen@xxxxxxxxxxxxxxx>
---
   drivers/gpu/drm/i915/i915_gem_request.c | 5 ++++-
   1 file changed, 4 insertions(+), 1 deletion(-)

diff --git a/drivers/gpu/drm/i915/i915_gem_request.c b/drivers/gpu/drm/i915/i915_gem_request.c
index e6d4857b1f78..6a143099cea1 100644
--- a/drivers/gpu/drm/i915/i915_gem_request.c
+++ b/drivers/gpu/drm/i915/i915_gem_request.c
@@ -1019,7 +1019,7 @@ void __i915_add_request(struct drm_i915_gem_request *request, bool flush_caches)
 	prev = i915_gem_active_raw(&timeline->last_request,
 				   &request->i915->drm.struct_mutex);
-	if (prev) {
+	if (prev && !i915_gem_request_completed(prev)) {
 		i915_sw_fence_await_sw_fence(&request->submit, &prev->submit,
 					     &request->submitq);

This makes sense.

 		if (engine->schedule)
@@ -1055,6 +1055,9 @@ void __i915_add_request(struct drm_i915_gem_request *request, bool flush_caches)
 	local_bh_disable();
 	i915_sw_fence_commit(&request->submit);
 	local_bh_enable(); /* Kick the execlists tasklet if just scheduled */
+
+	if (prev && i915_gem_request_completed(prev))
+		i915_gem_request_retire_upto(prev);

And here I'm a bit surprised that you want to penalize the submission
path with house-keeping - assuming cases when there really is a big
backlog of completed requests. But since it comes after the tasklet
kicking, I suppose the effect on submission latency is somewhat
mitigated. Unless the caller wants to submit many requests rapidly. Hm..
retiring at execbuf time seems to keep coming in and out of the driver,
albeit in a more controlled fashion with this.

I was surprised myself ;) What I considered as the next step here is to
limit the retirements to the client's timeline to avoid having to do
work for others. It helps that this comes after the submission of the
next request, so we have at least a few microseconds to play with, which
makes it seem less obnoxious to me. Plus it's so unlikely to happen
that, to me, it suggests we have fallen so far behind in our alloc/retire
equilibrium that a catch-up is justified. And most of the heavy work has
been moved from request retirement onto kthreads (object release, context
release etc).

Okay, you convinced me. Well, actually, I am still not sure that some
diabolical submission pattern couldn't keep triggering this path, but at
least it shouldn't trigger on every submission.

Would you be happy to split up the two parts of this patch?

Regards,

Tvrtko
_______________________________________________
Intel-gfx mailing list
Intel-gfx@xxxxxxxxxxxxxxxxxxxxx
https://lists.freedesktop.org/mailman/listinfo/intel-gfx



