On 01/05/2018 11:46, Chris Wilson wrote:
Due to the latency of the tasklet running from ksoftirqd, by the time we
process the execlist dequeue we may be a long time behind the GPU. If the
request had already completed when we ran the reschedule, we will not have
tweaked its priority; but if it is still listed as in-flight at dequeue
time, we use it as the priority reference for the rest of the queue,
including requests from its own context, which will now appear to be at
higher priority. This can cause us to issue a preempt-to-idle request even
though the request we want to preempt is already complete.
Reported-by: Tvrtko Ursulin <tvrtko.ursulin@xxxxxxxxx>
Signed-off-by: Chris Wilson <chris@xxxxxxxxxxxxxxxxxx>
Cc: Tvrtko Ursulin <tvrtko.ursulin@xxxxxxxxx>
---
drivers/gpu/drm/i915/intel_lrc.c | 3 ++-
1 file changed, 2 insertions(+), 1 deletion(-)
diff --git a/drivers/gpu/drm/i915/intel_lrc.c b/drivers/gpu/drm/i915/intel_lrc.c
index c6464336963e..d344fbe17997 100644
--- a/drivers/gpu/drm/i915/intel_lrc.c
+++ b/drivers/gpu/drm/i915/intel_lrc.c
@@ -610,7 +610,8 @@ static void execlists_dequeue(struct intel_engine_cs *engine)
 		if (!execlists_is_active(execlists, EXECLISTS_ACTIVE_HWACK))
 			goto unlock;
 
-		if (need_preempt(engine, last, execlists->queue_priority)) {
+		if (need_preempt(engine, last, execlists->queue_priority) &&
+		    !i915_request_completed(last)) {
 			inject_preempt_context(engine);
 			goto unlock;
 		}
It makes sense on the most rudimentary level: don't preempt a context
which is already in the process of saving itself out.
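
To make the race concrete, here is a stand-alone sketch of the dequeue
decision (fake_request, need_preempt_model and dequeue_model are made-up
names for illustration only, not the real i915 code): with a stale,
already-completed request still used as the priority reference, the
unpatched check would inject a preempt-to-idle, while the extra
i915_request_completed() guard lets us dequeue the next request directly.

/*
 * Illustrative model only; none of these names exist in i915.
 */
#include <stdbool.h>
#include <stdio.h>

struct fake_request {
	int prio;		/* priority recorded for the in-flight request */
	bool completed;		/* has the GPU already finished it? */
};

/* Shape of the need_preempt() test: preempt when the software queue
 * outranks the request we still believe is running. */
static bool need_preempt_model(const struct fake_request *last,
			       int queue_priority)
{
	return queue_priority > last->prio;
}

static void dequeue_model(const struct fake_request *last,
			  int queue_priority, bool check_completed)
{
	if (need_preempt_model(last, queue_priority) &&
	    (!check_completed || !last->completed))
		printf("inject preempt-to-idle (queue %d > last %d)\n",
		       queue_priority, last->prio);
	else
		printf("no preemption, dequeue the next request directly\n");
}

int main(void)
{
	/* The ksoftirqd tasklet ran late: "last" completed long ago, but a
	 * reschedule has since raised the queue priority above it. */
	struct fake_request last = { .prio = 0, .completed = true };
	int queue_priority = 1;

	dequeue_model(&last, queue_priority, false);	/* before the patch */
	dequeue_model(&last, queue_priority, true);	/* with the patch */
	return 0;
}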
Reviewed-by: Tvrtko Ursulin <tvrtko.ursulin@xxxxxxxxx>
Regards,
Tvrtko