Re: [PATCH v2] RFC drm/i915: Mark runtime_pm as a special class of lock

On Fri, Jul 13, 2018 at 02:29:58PM +0100, Chris Wilson wrote:
> Quoting Daniel Vetter (2018-07-12 13:58:11)
> > On Thu, Jul 12, 2018 at 09:41:07AM +0100, Chris Wilson wrote:
> > > Quoting Chris Wilson (2018-07-12 09:36:33)
> > > > Signed-off-by: Chris Wilson <chris@xxxxxxxxxxxxxxxxxx>
> > > > ---
> > > >  drivers/gpu/drm/i915/i915_drv.c         |  5 +++++
> > > >  drivers/gpu/drm/i915/i915_drv.h         |  1 +
> > > >  drivers/gpu/drm/i915/intel_runtime_pm.c | 11 +++++++++++
> > > >  3 files changed, 17 insertions(+)
> > > > 
> > > > diff --git a/drivers/gpu/drm/i915/i915_drv.c b/drivers/gpu/drm/i915/i915_drv.c
> > > > index 3eba3d1ab5b8..2e6d3259f6d0 100644
> > > > --- a/drivers/gpu/drm/i915/i915_drv.c
> > > > +++ b/drivers/gpu/drm/i915/i915_drv.c
> > > > @@ -2603,6 +2603,7 @@ static int intel_runtime_suspend(struct device *kdev)
> > > >         DRM_DEBUG_KMS("Suspending device\n");
> > > >  
> > > >         disable_rpm_wakeref_asserts(dev_priv);
> > > > +       lock_map_acquire(&dev_priv->runtime_pm.lock);
> > > >  
> > > >         /*
> > > >          * We are safe here against re-faults, since the fault handler takes
> > > > @@ -2637,11 +2638,13 @@ static int intel_runtime_suspend(struct device *kdev)
> > > >                 i915_gem_init_swizzling(dev_priv);
> > > >                 i915_gem_restore_fences(dev_priv);
> > > >  
> > > > +               lock_map_release(&dev_priv->runtime_pm.lock);
> > > >                 enable_rpm_wakeref_asserts(dev_priv);
> > > >  
> > > >                 return ret;
> > > >         }
> > > >  
> > > > +       lock_map_release(&dev_priv->runtime_pm.lock);
> > > 
> > > What happens if we don't release the lock here? I think that's what
> > > we want... While suspended we are not allowed to do any action that
> > > would ordinarily require a wakeref. However that scares me: it's
> > > incredibly broad, and I think lockdep is process-centric so doesn't
> > > track locks in this manner?
> > 
> > Lockdep requires that acquire & release happen in the same process
> > context. For dependencies crossing that boundary we want cross-release.
> > And yes, I think a cross-release dependency between our rpm_suspend and
> > rpm_get is required for a full annotation. But since cross-release is
> > stuck in limbo due to meltdown/spectre, that's still a way off :-/
> 
> Bah, we can't do it without cross-release as we pass our wakelock
> around a lot. We start off with an unbalanced lock and never recover.
> Drat, I was hoping this would make the verification of vm.mutex vs
> runtime_pm more convincing.

Yes, rpm_get/put is essentially a full rw-semaphore which can also move
between processes. It's the most evil of locks, and cross-release would
help a lot.

But given how hard a time cross-release is having with just the minimal
waitqueue annotations, and how much fun everyone has making rpm not
deadlock too much, I'm not really holding out for proper cross-release
annotations for rpm in upstream. And we really need them in upstream, or
we'll spend 200% of our time fixing everyone else's bugs :-/
-Daniel
-- 
Daniel Vetter
Software Engineer, Intel Corporation
http://blog.ffwll.ch