Quoting Mika Kuoppala (2020-07-02 16:46:22)
> Chris Wilson <chris@xxxxxxxxxxxxxxxxxx> writes:
>
> > Pull the repeated check for the last active request being completed to a
> > single spot, when deciding whether or not execlist preemption is
> > required.
> >
> > Signed-off-by: Chris Wilson <chris@xxxxxxxxxxxxxxxxxx>
> > ---
> >  drivers/gpu/drm/i915/gt/intel_lrc.c | 14 ++++----------
> >  1 file changed, 4 insertions(+), 10 deletions(-)
> >
> > diff --git a/drivers/gpu/drm/i915/gt/intel_lrc.c b/drivers/gpu/drm/i915/gt/intel_lrc.c
> > index 4eb397b0e14d..7bdbfac26d7b 100644
> > --- a/drivers/gpu/drm/i915/gt/intel_lrc.c
> > +++ b/drivers/gpu/drm/i915/gt/intel_lrc.c
> > @@ -2137,12 +2137,11 @@ static void execlists_dequeue(struct intel_engine_cs *engine)
> >  	 */
> >
> >  	if ((last = *active)) {
> > -		if (need_preempt(engine, last, rb)) {
> > -			if (i915_request_completed(last)) {
> > -				tasklet_hi_schedule(&execlists->tasklet);
> > -				return;
> > -			}
> > +		if (i915_request_completed(last) &&
> > +		    !list_is_last(&last->sched.link, &engine->active.requests))
>
> You return if it is not last? Also the hi schedule is gone.

The kick was just causing us to busyspin ahead of the HW CS event. On
tracing, it did not seem worth it.

If this is the last request, the GPU is now idling and we know that we
will not try and lite restore into that request/context. So instead of
waiting for the CS event, we go ahead and prepare the next pair of
contexts.

If it was not the last request, we know there is a context the GPU will
switch into, so the urgency is not an issue. However, we have to be
careful that we don't issue an ELSP into that second context in case we
catch it as it idles (thus hanging the HW).
-Chris
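
For clarity, here is a rough sketch of how the consolidated check could
read in execlists_dequeue() with the two cases above annotated. The
quoted hunk is cut off before the body of the new condition, so the
early return, the comments, and the assumption that the need_preempt()
path simply follows are reconstructed from the explanation rather than
taken verbatim from the patch:

	if ((last = *active)) {
		/*
		 * If last has already completed but is not the final
		 * request in the queue, the HW will emit a CS event and
		 * switch into the next context on its own. Writing a
		 * fresh ELSP now risks catching that second context as
		 * it idles (hanging the HW), so back off and wait for
		 * the CS event instead of kicking the tasklet.
		 */
		if (i915_request_completed(last) &&
		    !list_is_last(&last->sched.link,
				  &engine->active.requests))
			return;

		/*
		 * Otherwise, if the completed request was the last one,
		 * the GPU is idling and no lite restore into that
		 * context will be attempted, so fall through and prepare
		 * the next pair of contexts without waiting for the CS
		 * event; the need_preempt() path below no longer needs
		 * its own completion check.
		 */
	}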