When we submit a request to the GPU, we first take the rpm wakelock and
only release it once the GPU has been idle for a small period of time
after all requests have completed. This means that no new interrupt can
arrive whilst we do not hold the rpm wakelock, and so we can drop the
individual get/put around every single request inside execlists.

Note: to close one potential issue we should mark the GPU as busy
earlier in __i915_add_request.

Signed-off-by: Chris Wilson <chris@xxxxxxxxxxxxxxxxxx>
---
 drivers/gpu/drm/i915/i915_gem.c  | 1 -
 drivers/gpu/drm/i915/intel_lrc.c | 3 ---
 2 files changed, 4 deletions(-)

diff --git a/drivers/gpu/drm/i915/i915_gem.c b/drivers/gpu/drm/i915/i915_gem.c
index 3d31ff11fbef..f94fe2ba4f6f 100644
--- a/drivers/gpu/drm/i915/i915_gem.c
+++ b/drivers/gpu/drm/i915/i915_gem.c
@@ -2698,7 +2698,6 @@ static void i915_gem_reset_ring_cleanup(struct drm_i915_private *dev_priv,
 					      struct drm_i915_gem_request,
 					      execlist_link);
 		list_del(&submit_req->execlist_link);
-		intel_runtime_pm_put(dev_priv);
 
 		if (submit_req->ctx != ring->default_context)
 			intel_lr_context_unpin(ring, submit_req->ctx);
diff --git a/drivers/gpu/drm/i915/intel_lrc.c b/drivers/gpu/drm/i915/intel_lrc.c
index ce84aa9811ba..8acfcf39e72d 100644
--- a/drivers/gpu/drm/i915/intel_lrc.c
+++ b/drivers/gpu/drm/i915/intel_lrc.c
@@ -525,8 +525,6 @@ static int execlists_context_queue(struct intel_engine_cs *ring,
 	}
 	request->tail = tail;
 
-	intel_runtime_pm_get(dev_priv);
-
 	spin_lock_irq(&ring->execlist_lock);
 
 	list_for_each_entry(cursor, &ring->execlist_queue, execlist_link)
@@ -953,7 +951,6 @@ void intel_execlists_retire_requests(struct intel_engine_cs *ring)
 		if (ctx_obj && (ctx != ring->default_context))
 			intel_lr_context_unpin(ring, ctx);
 
-		intel_runtime_pm_put(dev_priv);
 		list_del(&req->execlist_link);
 		i915_gem_request_unreference(req);
 	}
-- 
2.1.4
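
For reference, the wakelock lifetime the patch relies on can be sketched
as a small standalone model. The names below (mark_busy, retire_request,
idle_timer_fired, IDLE_GRACE_MS) are hypothetical illustrations rather
than the real i915 helpers, and the model covers only the reference
counting, not the actual timer or locking:

/*
 * Standalone sketch of the wakelock lifetime described above.
 * Hypothetical helper names; these are not the i915 functions.
 *
 * The device takes one rpm reference when it first becomes busy and
 * drops it only after it has been idle for IDLE_GRACE_MS. Any request
 * submitted inside the grace period disarms the pending release, so
 * no interrupt can arrive while the reference is not held -- which is
 * what makes the per-request get/put redundant.
 */
#include <stdbool.h>
#include <stdio.h>

#define IDLE_GRACE_MS 100

struct device_pm {
	int wakelock_refs;	/* rpm references currently held */
	int active_requests;	/* requests in flight on the GPU */
	bool release_pending;	/* delayed put armed? */
};

static void rpm_get(struct device_pm *pm)
{
	pm->wakelock_refs++;
	printf("rpm_get -> refs=%d\n", pm->wakelock_refs);
}

static void rpm_put(struct device_pm *pm)
{
	pm->wakelock_refs--;
	printf("rpm_put -> refs=%d\n", pm->wakelock_refs);
}

/* Called on request submission, i.e. the __i915_add_request analogue. */
static void mark_busy(struct device_pm *pm)
{
	if (pm->release_pending)
		pm->release_pending = false;	/* still held: just disarm */
	else if (pm->active_requests == 0)
		rpm_get(pm);	/* first request: take the wakelock */
	pm->active_requests++;
}

/* Called when a request completes. */
static void retire_request(struct device_pm *pm)
{
	if (--pm->active_requests == 0)
		pm->release_pending = true;	/* arm the delayed release */
}

/* Called IDLE_GRACE_MS after the last retirement, if still idle. */
static void idle_timer_fired(struct device_pm *pm)
{
	if (pm->release_pending && pm->active_requests == 0) {
		pm->release_pending = false;
		rpm_put(pm);
	}
}

int main(void)
{
	struct device_pm pm = { 0 };

	mark_busy(&pm);		/* request 1: refs go 0 -> 1 */
	mark_busy(&pm);		/* request 2: no extra get needed */
	retire_request(&pm);
	retire_request(&pm);	/* idle: release armed, not dropped */
	mark_busy(&pm);		/* within grace period: disarm only */
	retire_request(&pm);
	idle_timer_fired(&pm);	/* grace period expires: refs 1 -> 0 */
	return 0;
}

Because a submission that lands inside the grace period merely disarms
the pending release, the reference count never drops to zero between
back-to-back workloads: submission itself guarantees the wakelock is
held, which is what makes the per-request get/put removed above
redundant.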