From: Tvrtko Ursulin <tvrtko.ursulin@xxxxxxxxx>

In GuC mode the LRC pinning lifetime depends exclusively on the request
lifetime. Since the latter is terminated by the seqno update, this opens
up a race between the GPU finishing the write-out of the context image
and the driver unpinning the LRC.

To extend the LRC lifetime we employ a similar approach to what legacy
ringbuffer submission does: start tracking the last submitted context
per engine and keep it pinned until it is replaced by another one.

Note that the driver unload path is a bit fragile and could benefit
greatly from efforts to unify the legacy and execlist submission code
paths. At the moment i915_gem_context_fini has special casing for the
two which is potentially not needed, and it also depends on
i915_gem_cleanup_ringbuffer running before it.

v2:
 * Move pinning into engine->emit_request and actually fix the
   reference/unreference logic. (Chris Wilson)
 * ring->dev can be NULL on driver unload so use a different route
   towards it.

v3:
 * Rebase.
 * Handle the reset path. (Chris Wilson)
 * Exclude the default context from the pinning - it is impossible to
   get it right before default context special casing in general is
   eliminated.

v4:
 * Rebased & moved context tracking to
   intel_logical_ring_advance_and_submit.

Signed-off-by: Tvrtko Ursulin <tvrtko.ursulin@xxxxxxxxx>
Issue: VIZ-4277
Cc: Chris Wilson <chris@xxxxxxxxxxxxxxxxxx>
Cc: Nick Hoath <nicholas.hoath@xxxxxxxxx>
---
 drivers/gpu/drm/i915/i915_gem_context.c | 10 +++++++---
 drivers/gpu/drm/i915/intel_lrc.c        | 16 ++++++++++++++--
 2 files changed, 21 insertions(+), 5 deletions(-)

diff --git a/drivers/gpu/drm/i915/i915_gem_context.c b/drivers/gpu/drm/i915/i915_gem_context.c
index 1a67e07b9e6a..7e6f8c7b6d01 100644
--- a/drivers/gpu/drm/i915/i915_gem_context.c
+++ b/drivers/gpu/drm/i915/i915_gem_context.c
@@ -324,9 +324,13 @@ err_destroy:
 static void i915_gem_context_unpin(struct intel_context *ctx,
 				   struct intel_engine_cs *engine)
 {
-	if (engine->id == RCS && ctx->legacy_hw_ctx.rcs_state)
-		i915_gem_object_ggtt_unpin(ctx->legacy_hw_ctx.rcs_state);
-	i915_gem_context_unreference(ctx);
+	if (i915.enable_execlists) {
+		intel_lr_context_unpin(ctx, engine);
+	} else {
+		if (engine->id == RCS && ctx->legacy_hw_ctx.rcs_state)
+			i915_gem_object_ggtt_unpin(ctx->legacy_hw_ctx.rcs_state);
+		i915_gem_context_unreference(ctx);
+	}
 }
 
 void i915_gem_context_reset(struct drm_device *dev)
diff --git a/drivers/gpu/drm/i915/intel_lrc.c b/drivers/gpu/drm/i915/intel_lrc.c
index 8eb6e364fefc..0e215ea3f8ab 100644
--- a/drivers/gpu/drm/i915/intel_lrc.c
+++ b/drivers/gpu/drm/i915/intel_lrc.c
@@ -766,6 +766,7 @@ intel_logical_ring_advance_and_submit(struct drm_i915_gem_request *request)
 {
 	struct intel_ringbuffer *ringbuf = request->ringbuf;
 	struct drm_i915_private *dev_priv = request->i915;
+	struct intel_engine_cs *engine = request->ring;
 
 	intel_logical_ring_advance(ringbuf);
 	request->tail = ringbuf->tail;
@@ -780,9 +781,20 @@ intel_logical_ring_advance_and_submit(struct drm_i915_gem_request *request)
 	intel_logical_ring_emit(ringbuf, MI_NOOP);
 	intel_logical_ring_advance(ringbuf);
 
-	if (intel_ring_stopped(request->ring))
+	if (intel_ring_stopped(engine))
 		return 0;
 
+	if (engine->last_context != request->ctx) {
+		if (engine->last_context)
+			intel_lr_context_unpin(engine->last_context, engine);
+		if (request->ctx != request->i915->kernel_context) {
+			intel_lr_context_pin(request->ctx, engine);
+			engine->last_context = request->ctx;
+		} else {
+			engine->last_context = NULL;
+		}
+	}
+
 	if (dev_priv->guc.execbuf_client)
 		i915_guc_submit(dev_priv->guc.execbuf_client, request);
 	else
@@ -1129,7 +1141,7 @@ void intel_lr_context_unpin(struct intel_context *ctx,
 {
 	struct drm_i915_gem_object *ctx_obj = ctx->engine[engine->id].state;
 
-	WARN_ON(!mutex_is_locked(&engine->dev->struct_mutex));
+	WARN_ON(!mutex_is_locked(&ctx->i915->dev->struct_mutex));
 
 	if (WARN_ON_ONCE(!ctx_obj))
 		return;
-- 
1.9.1