Quoting Tvrtko Ursulin (2019-05-02 14:51:31)
> 
> On 02/05/2019 14:22, Chris Wilson wrote:
> > Quoting Tvrtko Ursulin (2019-05-02 14:19:38)
> >>
> >> On 01/05/2019 12:45, Chris Wilson wrote:
> >>> diff --git a/drivers/gpu/drm/i915/i915_gem_pm.c b/drivers/gpu/drm/i915/i915_gem_pm.c
> >>> index 49b0ce594f20..ae91ad7cb31e 100644
> >>> --- a/drivers/gpu/drm/i915/i915_gem_pm.c
> >>> +++ b/drivers/gpu/drm/i915/i915_gem_pm.c
> >>> @@ -29,12 +29,12 @@ static void i915_gem_park(struct drm_i915_private *i915)
> >>>  static void idle_work_handler(struct work_struct *work)
> >>>  {
> >>>  	struct drm_i915_private *i915 =
> >>> -		container_of(work, typeof(*i915), gem.idle_work.work);
> >>> +		container_of(work, typeof(*i915), gem.idle_work);
> >>>  
> >>>  	mutex_lock(&i915->drm.struct_mutex);
> >>>  
> >>>  	intel_wakeref_lock(&i915->gt.wakeref);
> >>> -	if (!intel_wakeref_active(&i915->gt.wakeref))
> >>> +	if (!intel_wakeref_active(&i915->gt.wakeref) && !work_pending(work))
> >>
> >> What is the reason for the !work_pending check?
> > 
> > Just that we are going to be called again, so wait until the next time to
> > see if we still need to park.
> 
> When does it get called again? If a whole new cycle of unpark-park
> happened before the previous park was able to finish?

work_pending() implies that we've done at least one cycle while we
waited for the locks and the work is already queued to be rerun.
-Chris
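
P.S. For anyone following along, a minimal sketch of the pattern under
discussion. This is not the i915 code: my_device, dev_lock, active and
park_hardware() are made-up stand-ins for the struct_mutex/wakeref
machinery, kept only to show how work_pending() detects a re-queue.

#include <linux/workqueue.h>
#include <linux/mutex.h>

struct my_device {
	struct mutex dev_lock;		/* stand-in for i915->drm.struct_mutex */
	struct work_struct idle_work;
	bool active;			/* stand-in for the GT wakeref */
};

static void park_hardware(struct my_device *dev)
{
	/* power down, flush state, etc. (details elided) */
}

static void idle_work_handler(struct work_struct *work)
{
	struct my_device *dev =
		container_of(work, struct my_device, idle_work);

	mutex_lock(&dev->dev_lock);

	/*
	 * The PENDING bit is cleared before this handler is invoked, so
	 * work_pending() is only true here if an unpark/park cycle
	 * re-queued idle_work while we slept on dev_lock. In that case
	 * the queued rerun will re-evaluate idleness, so parking now
	 * would be premature.
	 */
	if (!dev->active && !work_pending(work))
		park_hardware(dev);

	mutex_unlock(&dev->dev_lock);
}

The key point is that queue_work() on an already-pending item is a
no-op, so work_pending() doubles as a "rerun is imminent" flag: the
handler can safely defer the decision to the queued rerun.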