Quoting Tvrtko Ursulin (2018-05-15 14:42:02)
> 
> On 13/05/2018 14:46, Chris Wilson wrote:
> > When switching to the kernel context, we force the switch to occur after
> > all currently active requests (so that we know the GPU won't switch
> > immediately away and the kernel context remains current as we work). To
> > do so we have to inspect all the timelines and add a fence from the
> > active work to queue our switch afterwards. We can use the tracked set
> > of active rings to shrink our search for active timelines.
> > 
> > Signed-off-by: Chris Wilson <chris@xxxxxxxxxxxxxxxxxx>
> > Cc: Tvrtko Ursulin <tvrtko.ursulin@xxxxxxxxx>
> > ---
> >   drivers/gpu/drm/i915/i915_gem_context.c | 23 ++++++++++++-----------
> >   1 file changed, 12 insertions(+), 11 deletions(-)
> > 
> > diff --git a/drivers/gpu/drm/i915/i915_gem_context.c b/drivers/gpu/drm/i915/i915_gem_context.c
> > index 33f8a4b3c981..48254483c4a6 100644
> > --- a/drivers/gpu/drm/i915/i915_gem_context.c
> > +++ b/drivers/gpu/drm/i915/i915_gem_context.c
> > @@ -596,41 +596,42 @@ last_request_on_engine(struct i915_timeline *timeline,
> >   
> >   static bool engine_has_idle_kernel_context(struct intel_engine_cs *engine)
> >   {
> > -	struct i915_timeline *timeline;
> > +	struct intel_ring *ring;
> >   
> > -	list_for_each_entry(timeline, &engine->i915->gt.timelines, link) {
> > -		if (last_request_on_engine(timeline, engine))
> > +	lockdep_assert_held(&engine->i915->drm.struct_mutex);
> > +	list_for_each_entry(ring, &engine->i915->gt.active_rings, active_link) {
> > +		if (last_request_on_engine(ring->timeline, engine))
> >   			return false;
> >   	}
> 
> Prettier if i915 passed directly to the function? A compromise would be
> to pull the list out into a local.

Yes it is a bit ugly, I just valued the function name more :)
-Chris
_______________________________________________
Intel-gfx mailing list
Intel-gfx@xxxxxxxxxxxxxxxxxxxxx
https://lists.freedesktop.org/mailman/listinfo/intel-gfx
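
[Editor's note: for readers following the review, a minimal, hypothetical sketch of the "pull the list out into a local" compromise Tvrtko suggests might look like the following. It is not from the posted patch or the thread; the tail of the function is elided here just as it is in the quoted diff.]

/*
 * Hypothetical illustration of the suggested compromise: keep the
 * engine-only signature, but hoist the active_rings list head into a
 * local so the loop body reads more cleanly.  Not the posted patch.
 */
static bool engine_has_idle_kernel_context(struct intel_engine_cs *engine)
{
	struct list_head *active_rings = &engine->i915->gt.active_rings;
	struct intel_ring *ring;

	lockdep_assert_held(&engine->i915->drm.struct_mutex);

	/* Only rings with requests in flight appear on active_rings. */
	list_for_each_entry(ring, active_rings, active_link) {
		if (last_request_on_engine(ring->timeline, engine))
			return false;
	}

	/* ... remainder of the function unchanged ... */
}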