On Mon, May 14, 2018 at 10:15:19PM +0100, Chris Wilson wrote:
> Quoting Tarun Vyas (2018-05-14 21:49:20)
> > intel_pipe_update_start also needs to wait for PSR to idle
> > out. Need some minor modifications in psr_wait_for_idle in
> > order to reuse it.
> >
> > Cc: Chris Wilson <chris@xxxxxxxxxxxxxxxxxx>
> > Signed-off-by: Tarun Vyas <tarun.vyas@xxxxxxxxx>
> > ---
> >  drivers/gpu/drm/i915/intel_psr.c | 29 ++++++++++++++++++-----------
> >  1 file changed, 18 insertions(+), 11 deletions(-)
> >
> > diff --git a/drivers/gpu/drm/i915/intel_psr.c b/drivers/gpu/drm/i915/intel_psr.c
> > index db27f2faa1de..40aafc0f4513 100644
> > --- a/drivers/gpu/drm/i915/intel_psr.c
> > +++ b/drivers/gpu/drm/i915/intel_psr.c
> > @@ -889,11 +889,15 @@ static bool psr_wait_for_idle(struct drm_i915_private *dev_priv)
> >  	i915_reg_t reg;
> >  	u32 mask;
> >  	int err;
> > +	bool wait = false;
> > +
> > +	mutex_lock(&dev_priv->psr.lock);
> >
> >  	intel_dp = dev_priv->psr.enabled;
> >  	if (!intel_dp)
> > -		return false;
> > +		goto unlock;
> >
> > +	wait = true;
> >  	if (HAS_DDI(dev_priv)) {
> >  		if (dev_priv->psr.psr2_enabled) {
> >  			reg = EDP_PSR2_STATUS;
> > @@ -911,15 +915,18 @@ static bool psr_wait_for_idle(struct drm_i915_private *dev_priv)
> >  		mask = VLV_EDP_PSR_IN_TRANS;
> >  	}
> >
> > +unlock:
> >  	mutex_unlock(&dev_priv->psr.lock);
> >
> > -	err = intel_wait_for_register(dev_priv, reg, mask, 0, 50);
> > -	if (err)
> > -		DRM_ERROR("Timed out waiting for PSR Idle for re-enable\n");
> > +	if (wait) {
> > +		err = intel_wait_for_register(dev_priv, reg, mask, 0, 50);
> > +		if (err) {
> > +			DRM_ERROR("Timed out waiting for PSR Idle for re-enable\n");
> > +			wait = false;
> > +		}
> > +	}
> >
> > -	/* After the unlocked wait, verify that PSR is still wanted! */
> > -	mutex_lock(&dev_priv->psr.lock);
> > -	return err == 0 && dev_priv->psr.enabled;
> > +	return wait;

I wanted to avoid taking this additional lock because all we need inside
intel_pipe_update_start is for PSR to go idle. So can we retain moving it
to intel_psr_work?

> >  }
> >
> >  static void intel_psr_work(struct work_struct *work)
> > @@ -927,7 +934,6 @@ static void intel_psr_work(struct work_struct *work)
> >  	struct drm_i915_private *dev_priv =
> >  		container_of(work, typeof(*dev_priv), psr.work.work);
> >
> > -	mutex_lock(&dev_priv->psr.lock);
> >
> >  	/*
> >  	 * We have to make sure PSR is ready for re-enable
> > @@ -936,14 +942,15 @@ static void intel_psr_work(struct work_struct *work)
> >  	 * and be ready for re-enable.
> >  	 */
> >  	if (!psr_wait_for_idle(dev_priv))
> > -		goto unlock;
> > +		return;
> >
> > -	/*
> > +	/* After the unlocked wait, verify that PSR is still wanted!
> >  	 * The delayed work can race with an invalidate hence we need to
> >  	 * recheck. Since psr_flush first clears this and then reschedules we
> >  	 * won't ever miss a flush when bailing out here.
> >  	 */
> > -	if (dev_priv->psr.busy_frontbuffer_bits)
> > +	mutex_lock(&dev_priv->psr.lock);
> > +	if (dev_priv->psr.enabled && dev_priv->psr.busy_frontbuffer_bits)
> >  		goto unlock;
>
> I'm not sold on the locking dropping here, doing so inside the wait is
> bad enough. (And do we need to there anyway?)
>
Thanks for the comments, Chris. In that case, as suggested by Rodrigo, can we
assert that the lock is held inside psr_wait_for_idle()?
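Concretely, I had something like the below in mind at the top of
psr_wait_for_idle(), so the locking stays with the callers. This is only a
rough, untested sketch to illustrate the assert, not the final shape of the
function:

	static bool psr_wait_for_idle(struct drm_i915_private *dev_priv)
	{
		struct intel_dp *intel_dp;

		/* Callers are expected to hold psr.lock; enforce that here. */
		lockdep_assert_held(&dev_priv->psr.lock);

		intel_dp = dev_priv->psr.enabled;
		if (!intel_dp)
			return false;
		...
	}
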
> Since you need to introduce intel_psr_wait_for_idle() anyway, how about
>
> void intel_psr_wait_for_idle(...)
> {
> 	mutex_lock(&i915->psr.lock);
> 	psr_wait_for_idle();
> 	mutex_unlock(&i915->psr.lock);
> }
> -Chris
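
If we do go with a locked wrapper like the above, I read it as roughly the
below, with intel_pipe_update_start() only ever calling the wrapper. Sketch
only: the dev_priv parameter and the exact call site are placeholders, not
the final form:

	/* intel_psr.c: thin locked wrapper around the existing helper */
	void intel_psr_wait_for_idle(struct drm_i915_private *dev_priv)
	{
		mutex_lock(&dev_priv->psr.lock);
		psr_wait_for_idle(dev_priv);
		mutex_unlock(&dev_priv->psr.lock);
	}

	/* intel_sprite.c: intel_pipe_update_start(), before the vblank evasion */
	intel_psr_wait_for_idle(dev_priv);

That would at least keep all psr.lock handling inside intel_psr.c, which I
think addresses the concern about dropping the lock inside the wait.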