On 03/05/2017 12:37, Chris Wilson wrote:

If we do not require to perform priority bumping, and we haven't yet submitted the request, we can update its priority in situ and skip acquiring the engine locks -- thus avoiding any contention between us and submit/execute.

Signed-off-by: Chris Wilson <chris@xxxxxxxxxxxxxxxxxx>
---
 drivers/gpu/drm/i915/intel_lrc.c | 11 +++++++++++
 1 file changed, 11 insertions(+)

diff --git a/drivers/gpu/drm/i915/intel_lrc.c b/drivers/gpu/drm/i915/intel_lrc.c
index fb0025627676..ca7f28795e2d 100644
--- a/drivers/gpu/drm/i915/intel_lrc.c
+++ b/drivers/gpu/drm/i915/intel_lrc.c
@@ -767,6 +767,17 @@ static void execlists_schedule(struct drm_i915_gem_request *request, int prio)
 		list_safe_reset_next(dep, p, dfs_link);
 	}

+	/* If we didn't need to bump any existing priorites, and we haven't
priorities
+ * yet submitted this request (i..e there is no porential race with
potential
+	 * execlists_submit_request()), we can set our own priority and skip
+	 * acquiring the engine locks.
+	 */
+	if (request->priotree.priority == INT_MIN) {
+		request->priotree.priority = prio;
+		if (stack.dfs_link.next == stack.dfs_link.prev)
+			return;
Move the assignment of the priority under the if? (Rough sketch of what I mean below.)
+	}
+
 	engine = request->engine;

 	spin_lock_irq(&engine->timeline->lock);
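
To be explicit about what I meant: something along these lines, i.e. only write the priority when we are about to take the early return, and otherwise leave it for the locked path below to bump. Completely untested sketch, names copied from your hunk, and I may well be missing a reason you want the priority set before the walk:

	if (request->priotree.priority == INT_MIN) {
		/*
		 * The DFS list holding only our own stack node means there
		 * are no dependencies to bump, so we can set the priority
		 * in place and skip taking the engine locks entirely.
		 */
		if (stack.dfs_link.next == stack.dfs_link.prev) {
			request->priotree.priority = prio;
			return;
		}
	}

If there are dependencies to walk, we fall through with the priority still at INT_MIN and the locked path below bumps it along with everything else, as I read it. But if you set it up front deliberately, ignore me.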
Regards,

Tvrtko