On Tue, Apr 19, 2016 at 01:02:26PM +0100, Tvrtko Ursulin wrote:
> On 19/04/16 07:49, Chris Wilson wrote:
> >diff --git a/drivers/gpu/drm/i915/intel_lrc.c b/drivers/gpu/drm/i915/intel_lrc.c
> >index b0d20af38574..0e55f206e592 100644
> >--- a/drivers/gpu/drm/i915/intel_lrc.c
> >+++ b/drivers/gpu/drm/i915/intel_lrc.c
> >@@ -708,6 +708,7 @@ int intel_logical_ring_alloc_request_extras(struct drm_i915_gem_request *request
> > 		request->ctx->engine[engine->id].initialised = true;
> > 	}
> >
> >+	request->pinned_context = request->ctx;
> 
> Add a little bit of comment to the big one above explaining the
> possibility of pinned_context being, not the previous, but the
> current one before submission?

This was here because we used to be able to cancel the context. Now that
we always go through intel_logical_ring_advance_and_submit, I've dropped
it. It makes me a little nervous because we are not clearly tracking the
pinned_context now.

I also switched to request->previous_context, but I'm undecided as to
whether that is a better name (still worrying over the lack of pinned
context tracking).

> > 	return 0;
> > }
> >
> >@@ -782,12 +783,8 @@ intel_logical_ring_advance_and_submit(struct drm_i915_gem_request *request)
> > 	intel_logical_ring_emit(ringbuf, MI_NOOP);
> > 	intel_logical_ring_advance(ringbuf);
> >
> >-	if (engine->last_context != request->ctx) {
> >-		if (engine->last_context)
> >-			intel_lr_context_unpin(engine->last_context, engine);
> >-		intel_lr_context_pin(request->ctx, engine);
> >-		engine->last_context = request->ctx;
> >-	}
> >+	request->pinned_context = engine->last_context;
> >+	engine->last_context = request->ctx;
> 
> I am not sure if this is very complicated or just very different
> from my approach. Either way, after thinking long and hard, I cannot
> fault it. Looks like it will work.

Subtle enough that I gave it a comment.
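For the record, the shape of the scheme is roughly this (a toy sketch
with simplified stand-in structs, not the real i915 types): at submit we
stash the engine's outgoing context in the request, and only at retire,
once the hardware has switched away, do we drop that pin.

```c
/* Toy model of deferred context unpinning. All names here are
 * simplified placeholders for illustration only. */
#include <assert.h>
#include <stddef.h>

struct context { int pin_count; };
struct request { struct context *ctx, *previous_context; };
struct engine  { struct context *last_context; };

static void context_pin(struct context *ctx)   { ctx->pin_count++; }
static void context_unpin(struct context *ctx)
{
	assert(ctx->pin_count > 0);
	ctx->pin_count--;
}

/* On submission: pin the incoming context and remember the outgoing
 * one (may be NULL if the engine was idle, or equal to rq->ctx). */
static void submit(struct engine *e, struct request *rq)
{
	context_pin(rq->ctx);
	rq->previous_context = e->last_context;
	e->last_context = rq->ctx;
}

/* On retirement: the GPU has moved past the previous context, so its
 * saved state is complete and the pin can finally be released. */
static void retire(struct request *rq)
{
	if (rq->previous_context)
		context_unpin(rq->previous_context);
}
```

The point of the indirection is that a context must stay pinned until
some *later* request retires, since only then do we know the hardware
has finished writing its state back.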
> > 	if (dev_priv->guc.execbuf_client)
> > 		i915_guc_submit(dev_priv->guc.execbuf_client, request);
> >@@ -1009,7 +1006,8 @@ void intel_execlists_retire_requests(struct intel_engine_cs *engine)
> > 	spin_unlock_bh(&engine->execlist_lock);
> >
> > 	list_for_each_entry_safe(req, tmp, &retired_list, execlist_link) {
> >-		intel_lr_context_unpin(req->ctx, engine);
> >+		if (req->pinned_context)
> >+			intel_lr_context_unpin(req->pinned_context, engine);
> >
> > 		list_del(&req->execlist_link);
> > 		i915_gem_request_unreference(req);
> 
> Reviewed-by: Tvrtko Ursulin <tvrtko.ursulin@xxxxxxxxx>
> 
> I suppose you did not see any performance effect since you decided
> to turn it on for both GuC and execlists? (Assuming vma iomap is in
> place.)

Context unpinning (with the caching in place) doesn't appear on the
profiles enough for me to worry about. There is easier low-hanging
fruit in the needless locked atomic instructions and the like that we
do. (Besides which, there are lots of reasons why execlists doesn't
yet outperform legacy...)
-Chris

-- 
Chris Wilson, Intel Open Source Technology Centre
_______________________________________________
Intel-gfx mailing list
Intel-gfx@xxxxxxxxxxxxxxxxxxxxx
https://lists.freedesktop.org/mailman/listinfo/intel-gfx