Chris Wilson <chris@xxxxxxxxxxxxxxxxxx> writes:

> In the idle worker we drop the prolonged GT wakeref used to cover such
> essentials as interrupt delivery. (When a CS interrupt arrives, we also
> assert that the GT is awake.) However, it turns out that 10ms is not
> long enough to be assured that the last CS interrupt has been delivered,
> so bump that to 200ms, and move the entirety of that wait to before we
> take the struct_mutex to avoid blocking. As this is now a potentially
> long wait, restore the earlier behaviour of bailing out early when a new
> request arrives.
>
> Signed-off-by: Chris Wilson <chris@xxxxxxxxxxxxxxxxxx>
> Cc: Joonas Lahtinen <joonas.lahtinen@xxxxxxxxxxxxxxx>
> Cc: Mika Kuoppala <mika.kuoppala@xxxxxxxxxxxxxxx>
> Cc: Imre Deak <imre.deak@xxxxxxxxx>
> ---
>  drivers/gpu/drm/i915/i915_gem.c | 31 ++++++++++++++++++++-----------
>  1 file changed, 20 insertions(+), 11 deletions(-)
>
> diff --git a/drivers/gpu/drm/i915/i915_gem.c b/drivers/gpu/drm/i915/i915_gem.c
> index 026cb52ece0b..d3a638613857 100644
> --- a/drivers/gpu/drm/i915/i915_gem.c
> +++ b/drivers/gpu/drm/i915/i915_gem.c
> @@ -3281,8 +3281,8 @@ i915_gem_idle_work_handler(struct work_struct *work)
>  {
>  	struct drm_i915_private *dev_priv =
>  		container_of(work, typeof(*dev_priv), gt.idle_work.work);
> -	struct drm_device *dev = &dev_priv->drm;
>  	bool rearm_hangcheck;
> +	ktime_t end;
>  
>  	if (!READ_ONCE(dev_priv->gt.awake))
>  		return;
> @@ -3291,14 +3291,22 @@ i915_gem_idle_work_handler(struct work_struct *work)
>  	 * Wait for last execlists context complete, but bail out in case a
>  	 * new request is submitted.
>  	 */
> -	wait_for(intel_engines_are_idle(dev_priv), 10);
> -	if (READ_ONCE(dev_priv->gt.active_requests))
> -		return;
> +	end = ktime_add_ms(ktime_get(), 200);
> +	do {
> +		if (READ_ONCE(dev_priv->gt.active_requests) ||
> +		    work_pending(work))
> +			return;
> +
> +		if (intel_engines_are_idle(dev_priv))
> +			break;
> +
> +		usleep_range(100, 500);
> +	} while (ktime_before(ktime_get(), end));

Why can't we just wait_for(intel_engines_are_idle(dev_priv), 200) ?

-Mika

>  	rearm_hangcheck =
>  		cancel_delayed_work_sync(&dev_priv->gpu_error.hangcheck_work);
>  
> -	if (!mutex_trylock(&dev->struct_mutex)) {
> +	if (!mutex_trylock(&dev_priv->drm.struct_mutex)) {
>  		/* Currently busy, come back later */
>  		mod_delayed_work(dev_priv->wq,
>  				 &dev_priv->gt.idle_work,
> @@ -3310,13 +3318,14 @@ i915_gem_idle_work_handler(struct work_struct *work)
>  	 * New request retired after this work handler started, extend active
>  	 * period until next instance of the work.
>  	 */
> -	if (work_pending(work))
> +	if (dev_priv->gt.active_requests || work_pending(work))
>  		goto out_unlock;
>  
> -	if (dev_priv->gt.active_requests)
> -		goto out_unlock;
> -
> -	if (wait_for(intel_engines_are_idle(dev_priv), 10))
> +	/*
> +	 * We are committed now to parking the engines, make sure there
> +	 * will be no more interrupts arriving later.
> +	 */
> +	if (!intel_engines_are_idle(dev_priv))
>  		DRM_ERROR("Timeout waiting for engines to idle\n");
>  
>  	intel_engines_mark_idle(dev_priv);
> @@ -3330,7 +3339,7 @@ i915_gem_idle_work_handler(struct work_struct *work)
>  	gen6_rps_idle(dev_priv);
>  	intel_runtime_pm_put(dev_priv);
>  out_unlock:
> -	mutex_unlock(&dev->struct_mutex);
> +	mutex_unlock(&dev_priv->drm.struct_mutex);
>  
>  out_rearm:
>  	if (rearm_hangcheck) {
> --
> 2.15.0.rc1

_______________________________________________
Intel-gfx mailing list
Intel-gfx@xxxxxxxxxxxxxxxxxxxxx
https://lists.freedesktop.org/mailman/listinfo/intel-gfx
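[Editor's note] The structure of the new wait loop in the patch (deadline, early bail-out, idle check, short sleep) can be sketched as a plain userspace C program. Everything below — `poll_until`, `now_ms`, `sleep_us`, the result enum — is a hypothetical illustration of the pattern only, not i915 code; the kernel version uses `ktime_get()`/`ktime_before()` and `usleep_range()` instead.

```c
#define _POSIX_C_SOURCE 199309L	/* clock_gettime, nanosleep */

#include <stdbool.h>
#include <time.h>

/* Hypothetical userspace analogue of the patch's loop: poll cond() until it
 * holds, the deadline passes, or abrt() reports that new work has arrived. */
enum poll_result { POLL_IDLE, POLL_ABORTED, POLL_TIMEDOUT };

static long long now_ms(void)
{
	struct timespec ts;

	clock_gettime(CLOCK_MONOTONIC, &ts);
	return ts.tv_sec * 1000LL + ts.tv_nsec / 1000000;
}

static void sleep_us(long us)
{
	struct timespec ts = { .tv_sec = 0, .tv_nsec = us * 1000 };

	nanosleep(&ts, NULL);
}

static enum poll_result poll_until(bool (*cond)(void), bool (*abrt)(void),
				   int timeout_ms)
{
	long long end = now_ms() + timeout_ms;	/* cf. ktime_add_ms() */

	do {
		if (abrt())	/* cf. gt.active_requests/work_pending() */
			return POLL_ABORTED;
		if (cond())	/* cf. intel_engines_are_idle() */
			return POLL_IDLE;
		sleep_us(100);	/* cf. usleep_range(100, 500) */
	} while (now_ms() < end);	/* cf. ktime_before() */

	return POLL_TIMEDOUT;
}
```

This also illustrates one possible reading of the difference Mika asks about: `wait_for()` polls a single condition against a timeout, while the open-coded loop additionally re-checks an abort condition every iteration, so a newly submitted request ends the (now much longer, 200ms) wait immediately rather than after the timeout expires. Whether that is the actual rationale is for the patch author to confirm.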