On Wed, 2015-12-16 at 12:18 +0100, Daniel Vetter wrote:
> On Tue, Dec 15, 2015 at 08:10:37PM +0200, Imre Deak wrote:
> > In some cases we want to check whether we hold an RPM wakelock reference
> > for the whole duration of a sequence. To achieve this, add a new RPM
> > atomic sequence counter that we increment any time the wakelock refcount
> > drops to zero. Check whether the sequence number stays the same during
> > the atomic section and that we hold the wakelock at the beginning of the
> > section.
> >
> > Motivated by Chris.
> >
> > v2-v3:
> > - unchanged
> > v4:
> > - swap the order of atomic_read() and assert_rpm_wakelock_held() in
> >   assert_rpm_atomic_begin() to avoid race
> >
> > Signed-off-by: Imre Deak <imre.deak@xxxxxxxxx>
> > Reviewed-by: Chris Wilson <chris@xxxxxxxxxxxxxxxxxx> (v3)
>
> Can we employ lockdep for some of this? In general lockdep doesn't mesh
> with rpm references, since rpm references can be transferred between
> processes, and lockdep doesn't like that.
>
> But in many cases, and this one here sounds like one, we don't do that.
> And those cases could be annotated/validated with the help of lockdep.
> The bonus of lockdep is that we could then also tell lockdep that the
> rpm suspend/resume functions acquire this lock context, which would
> catch bugs like mutex_lock(dev->struct_mutex); before rpm_get().
>
> Just a thought really.

Yes, as I mentioned I also played with the idea. It could be done by
separating the RPM wakelock users into "prolonged" and "short time" users
(I couldn't think of better names). I explained what I mean in the top
three patches of:

https://github.com/ideak/linux/commits/rpm-lockdep-annotations

But I would still need to do more testing with this, making sure the
annotations work as expected, in particular that I call lock_acquire()
with the proper parameters. Until then we could use this simpler check,
which could already reveal some issues now.

--Imre
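For illustration only, the kind of annotation discussed above could look
roughly like the sketch below. The lockdep map and helper names are
hypothetical (they are not taken from the patches in the branch linked
above); they only show how the "short time" wakeref users and the runtime
suspend path could be wired up to lockdep:

/*
 * HYPOTHETICAL sketch: model the RPM wakeref as a lockdep map so that
 * lockdep can catch ordering bugs such as taking dev->struct_mutex
 * before intel_runtime_pm_get(), given that the runtime suspend handler
 * itself needs struct_mutex. All names below are illustrative.
 */
#include <linux/lockdep.h>

static struct lockdep_map i915_rpm_wakeref_map =
	STATIC_LOCKDEP_MAP_INIT("i915_rpm_wakeref", &i915_rpm_wakeref_map);

/* Short-time users: called from intel_runtime_pm_get()/_put(). */
static inline void rpm_wakeref_lockdep_get(void)
{
	/* Read-recursive, since wakerefs may nest within the same task. */
	lock_map_acquire_read(&i915_rpm_wakeref_map);
}

static inline void rpm_wakeref_lockdep_put(void)
{
	lock_map_release(&i915_rpm_wakeref_map);
}

/*
 * Called around the body of the runtime suspend/resume handlers, so
 * lockdep records the wakeref -> struct_mutex ordering there and can
 * flag the reverse ordering in a driver path as a potential deadlock.
 */
static inline void rpm_suspend_lockdep_enter(void)
{
	lock_map_acquire(&i915_rpm_wakeref_map);
}

static inline void rpm_suspend_lockdep_exit(void)
{
	lock_map_release(&i915_rpm_wakeref_map);
}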
> > ---
> >  drivers/gpu/drm/i915/i915_drv.h         |  1 +
> >  drivers/gpu/drm/i915/intel_drv.h        | 17 +++++++++++++++++
> >  drivers/gpu/drm/i915/intel_pm.c         |  1 +
> >  drivers/gpu/drm/i915/intel_runtime_pm.c |  3 ++-
> >  4 files changed, 21 insertions(+), 1 deletion(-)
> >
> > diff --git a/drivers/gpu/drm/i915/i915_drv.h b/drivers/gpu/drm/i915/i915_drv.h
> > index 2894716..00ce627 100644
> > --- a/drivers/gpu/drm/i915/i915_drv.h
> > +++ b/drivers/gpu/drm/i915/i915_drv.h
> > @@ -1603,6 +1603,7 @@ struct skl_wm_level {
> >   */
> >  struct i915_runtime_pm {
> >  	atomic_t wakeref_count;
> > +	atomic_t atomic_seq;
> >  	bool suspended;
> >  	bool irqs_enabled;
> >  };
> > diff --git a/drivers/gpu/drm/i915/intel_drv.h b/drivers/gpu/drm/i915/intel_drv.h
> > index d8e4aca..88d37eb 100644
> > --- a/drivers/gpu/drm/i915/intel_drv.h
> > +++ b/drivers/gpu/drm/i915/intel_drv.h
> > @@ -1446,6 +1446,23 @@ assert_rpm_wakelock_held(struct drm_i915_private *dev_priv)
> >  		  "RPM wakelock ref not held during HW access");
> >  }
> >
> > +static inline int
> > +assert_rpm_atomic_begin(struct drm_i915_private *dev_priv)
> > +{
> > +	int seq = atomic_read(&dev_priv->pm.atomic_seq);
> > +
> > +	assert_rpm_wakelock_held(dev_priv);
> > +
> > +	return seq;
> > +}
> > +
> > +static inline void
> > +assert_rpm_atomic_end(struct drm_i915_private *dev_priv, int begin_seq)
> > +{
> > +	WARN_ONCE(atomic_read(&dev_priv->pm.atomic_seq) != begin_seq,
> > +		  "HW access outside of RPM atomic section\n");
> > +}
> > +
> >  /**
> >   * disable_rpm_wakeref_asserts - disable the RPM assert checks
> >   * @dev_priv: i915 device instance
> > diff --git a/drivers/gpu/drm/i915/intel_pm.c b/drivers/gpu/drm/i915/intel_pm.c
> > index 6c08537..6eb9606 100644
> > --- a/drivers/gpu/drm/i915/intel_pm.c
> > +++ b/drivers/gpu/drm/i915/intel_pm.c
> > @@ -7248,4 +7248,5 @@ void intel_pm_setup(struct drm_device *dev)
> >
> >  	dev_priv->pm.suspended = false;
> >  	atomic_set(&dev_priv->pm.wakeref_count, 0);
> > +	atomic_set(&dev_priv->pm.atomic_seq, 0);
> >  }
> > diff --git a/drivers/gpu/drm/i915/intel_runtime_pm.c b/drivers/gpu/drm/i915/intel_runtime_pm.c
> > index 82c55a9..cee54ea 100644
> > --- a/drivers/gpu/drm/i915/intel_runtime_pm.c
> > +++ b/drivers/gpu/drm/i915/intel_runtime_pm.c
> > @@ -2283,7 +2283,8 @@ void intel_runtime_pm_put(struct drm_i915_private *dev_priv)
> >  	struct device *device = &dev->pdev->dev;
> >
> >  	assert_rpm_wakelock_held(dev_priv);
> > -	atomic_dec(&dev_priv->pm.wakeref_count);
> > +	if (atomic_dec_and_test(&dev_priv->pm.wakeref_count))
> > +		atomic_inc(&dev_priv->pm.atomic_seq);
> >
> >  	pm_runtime_mark_last_busy(device);
> >  	pm_runtime_put_autosuspend(device);
> > --
> > 2.5.0

_______________________________________________
Intel-gfx mailing list
Intel-gfx@xxxxxxxxxxxxxxxxxxxxx
http://lists.freedesktop.org/mailman/listinfo/intel-gfx
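For reference, the intended usage pattern of the asserts added in the patch
quoted above is to sample the sequence number at the start of a HW access
sequence and verify it at the end. The caller below is a hypothetical
example, not part of the patch:

/*
 * Hypothetical caller showing the intended use of the asserts added by
 * the patch quoted above: if the wakeref count drops to zero (bumping
 * atomic_seq) between begin and end, the end check fires.
 */
static void example_hw_access_sequence(struct drm_i915_private *dev_priv)
{
	int seq = assert_rpm_atomic_begin(dev_priv);

	/*
	 * ... MMIO reads/writes that must all happen within a single
	 * RPM-awake section go here ...
	 */

	assert_rpm_atomic_end(dev_priv, seq);
}

Since atomic_seq is only incremented when the wakeref count actually drops
to zero, nested get/put pairs inside such a section do not trigger the
warning as long as the outer reference is held.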