Quoting Tvrtko Ursulin (2020-05-21 10:10:10)
>
> On 21/05/2020 09:53, Chris Wilson wrote:
> > Since the removal of no-semaphore boosting, we rely on timeslicing to
> > reorder past inter-dependency hogs across the engines. However, we
> > require preemption to support timeslicing into user payloads, and not
> > all machines support preemption, so we do not universally enable
> > timeslicing even when it would preempt our own inter-engine semaphores.
> >
> > Testcase: igt/gem_exec_schedule/semaphore-codependency # bdw/bsw
> > Fixes: 18e4af04d218 ("drm/i915: Drop no-semaphore boosting")
> > Signed-off-by: Chris Wilson <chris@xxxxxxxxxxxxxxxxxx>
> > Cc: Tvrtko Ursulin <tvrtko.ursulin@xxxxxxxxx>
> > Cc: Mika Kuoppala <mika.kuoppala@xxxxxxxxxxxxxxx>
> > ---
> >  drivers/gpu/drm/i915/gem/i915_gem_context.c | 4 ++--
> >  1 file changed, 2 insertions(+), 2 deletions(-)
> >
> > diff --git a/drivers/gpu/drm/i915/gem/i915_gem_context.c b/drivers/gpu/drm/i915/gem/i915_gem_context.c
> > index 900ea8b7fc8f..f5d59d18cd5b 100644
> > --- a/drivers/gpu/drm/i915/gem/i915_gem_context.c
> > +++ b/drivers/gpu/drm/i915/gem/i915_gem_context.c
> > @@ -230,7 +230,7 @@ static void intel_context_set_gem(struct intel_context *ce,
> >  		ce->timeline = intel_timeline_get(ctx->timeline);
> >  
> >  	if (ctx->sched.priority >= I915_PRIORITY_NORMAL &&
> > -	    intel_engine_has_semaphores(ce->engine))
> > +	    intel_engine_has_timeslices(ce->engine))
> >  		__set_bit(CONTEXT_USE_SEMAPHORES, &ce->flags);
> >  }
> >  
> > @@ -1969,7 +1969,7 @@ static int __apply_priority(struct intel_context *ce, void *arg)
> >  {
> >  	struct i915_gem_context *ctx = arg;
> >  
> > -	if (!intel_engine_has_semaphores(ce->engine))
> > +	if (!intel_engine_has_timeslices(ce->engine))
> >  		return 0;
> >  
> >  	if (ctx->sched.priority >= I915_PRIORITY_NORMAL)
>
> __i915_request_await_execution is okay to keep using semaphores?

I think so. Using semaphores there still benefits from synchronising with
a master in ELSP[1].

The danger is that it does increase the hangcheck possibility for the bond
request, such that a slow request before the master would result in us
declaring the bond hung. The question is whether that is worse than
executing the bond before the master.

I should be able to write a test to demonstrate the hang in the bond. For
example, if we do something like:

	on master engine:
		submit spin
		submit master -> submit fence -> submit bond
		for(;;)
			submit high priority spin
			terminate previous spin

Hmm. But without preemption... the master will execute before we get a
chance to submit the high priority spinners. So this will not actually
hang.

Ok, so this is safer than it seems :) I just need to write that test and
execute it on broadwell to verify my claim.
-Chris
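
The rationale for the gate change in the two hunks above can be restated
as a minimal stand-alone sketch: timeslicing requires preemption, so
keying semaphore use off timeslice support means a context only
busy-waits on a semaphore where the scheduler can later preempt past it,
which bare semaphore support on bdw/bsw does not guarantee. All names
below are hypothetical toy-model names, not the real i915 helpers.

	/*
	 * Toy model only -- hypothetical names, not the i915 code.
	 * Contrasts gating semaphore use on "has semaphores" (old)
	 * versus "has timeslices" (new), where timeslicing itself
	 * requires preemption.
	 */
	#include <stdbool.h>
	#include <stdio.h>

	struct engine_caps {
		bool has_semaphores;	/* HW can busy-wait on a semaphore */
		bool has_preemption;	/* scheduler can preempt user payloads */
	};

	/* Timeslicing only works where the running payload can be preempted. */
	static bool has_timeslices(const struct engine_caps *e)
	{
		return e->has_preemption;
	}

	/* Old gate: any semaphore-capable engine may busy-wait in ELSP. */
	static bool use_semaphores_old(const struct engine_caps *e)
	{
		return e->has_semaphores;
	}

	/* New gate: only busy-wait where timeslicing can reorder past us. */
	static bool use_semaphores_new(const struct engine_caps *e)
	{
		return has_timeslices(e);
	}

	int main(void)
	{
		/* bdw/bsw-like part: semaphores but no preemption. */
		const struct engine_caps bdw = {
			.has_semaphores = true,
			.has_preemption = false,
		};

		printf("old gate: %s\n", use_semaphores_old(&bdw) ?
		       "semaphores on, nothing can preempt the wait" :
		       "semaphores off");
		printf("new gate: %s\n", use_semaphores_new(&bdw) ?
		       "semaphores on" :
		       "semaphores off, no risk of hogging");
		return 0;
	}

On a bdw/bsw-like configuration the old gate enables a semaphore
busy-wait that nothing can displace, while the new gate leaves semaphores
off, which is exactly what the two hunks change.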
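
The observation that the bond scenario cannot actually hang rests on
in-order execution without preemption: once the master has been
submitted, a later high-priority submission cannot displace it, whatever
its priority. A tiny model of that ordering (made-up names, nothing
i915-specific):

	/*
	 * Toy model only: a non-preemptive engine runs requests in
	 * submission order, so priority is irrelevant once the master
	 * is already queued ahead of the high-priority spinner.
	 */
	#include <stdio.h>

	struct request {
		const char *name;
		int prio;
	};

	/* Run-to-completion, FIFO: later high priority cannot jump ahead. */
	static void run_engine(const struct request *q, int count)
	{
		for (int i = 0; i < count; i++)
			printf("executing %s (prio %d)\n", q[i].name, q[i].prio);
	}

	int main(void)
	{
		const struct request queue[] = {
			{ "spin",			0 },
			{ "master",			0 },
			{ "high priority spin",	99 },	/* submitted too late */
		};

		run_engine(queue, 3);
		return 0;
	}

The master drains before the high-priority spin is even considered,
matching the "this will not actually hang" conclusion above.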