Re: [PATCH] drm/i915: Avoid uninterruptible spinning in debugfs

Quoting Tvrtko Ursulin (2019-02-01 17:29:45)
> 
> On 01/02/2019 10:52, Chris Wilson wrote:
> > diff --git a/drivers/gpu/drm/i915/i915_gem.c b/drivers/gpu/drm/i915/i915_gem.c
> > index 4067eeaee78a..e47d53e9b634 100644
> > --- a/drivers/gpu/drm/i915/i915_gem.c
> > +++ b/drivers/gpu/drm/i915/i915_gem.c
> > @@ -4499,13 +4499,17 @@ int i915_gem_suspend(struct drm_i915_private *i915)
> >       mutex_unlock(&i915->drm.struct_mutex);
> >       i915_reset_flush(i915);
> >   
> > -     drain_delayed_work(&i915->gt.retire_work);
> > -
> >       /*
> >        * As the idle_work is rearming if it detects a race, play safe and
> >        * repeat the flush until it is definitely idle.
> >        */
> > -     drain_delayed_work(&i915->gt.idle_work);
> > +     if (drain_delayed_work_state(&i915->gt.retire_work,
> > +                                  TASK_INTERRUPTIBLE) ||
> > +         drain_delayed_work_state(&i915->gt.idle_work,
> > +                                  TASK_INTERRUPTIBLE)) {
> > +             ret = -EINTR;
> > +             goto err_unlock;
> > +     }
> 
> I'm no suspend expert but it sounds unexpected there would be signals 
> involved in the process. Does it have an userspace component? We don't 
> bother with interruptible mutex on this path either.

You wouldn't believe what users get up to! And then they file a bug
report that they interrupted suspend and it said EINTR...

> > diff --git a/drivers/gpu/drm/i915/i915_utils.h b/drivers/gpu/drm/i915/i915_utils.h
> > index fcc751aa1ea8..6866b85e4239 100644
> > --- a/drivers/gpu/drm/i915/i915_utils.h
> > +++ b/drivers/gpu/drm/i915/i915_utils.h
> > @@ -25,6 +25,8 @@
> >   #ifndef __I915_UTILS_H
> >   #define __I915_UTILS_H
> >   
> > +#include <linux/sched/signal.h>
> > +
> >   #undef WARN_ON
> >   /* Many gcc seem to no see through this and fall over :( */
> >   #if 0
> > @@ -148,12 +150,22 @@ static inline void __list_del_many(struct list_head *head,
> >    * by requeueing itself. Note, that if the worker never cancels itself,
> >    * we will spin forever.
> >    */
> > -static inline void drain_delayed_work(struct delayed_work *dw)
> > +
> > +static inline int drain_delayed_work_state(struct delayed_work *dw, int state)
> >   {
> >       do {
> > -             while (flush_delayed_work(dw))
> > -                     ;
> > +             do {
> > +                     if (signal_pending_state(state, current))
> > +                             return -EINTR;
> > +             } while (flush_delayed_work(dw));
> >       } while (delayed_work_pending(dw));
> > +
> > +     return 0;
> > +}
> > +
> > +static inline void drain_delayed_work(struct delayed_work *dw)
> > +{
> > +     drain_delayed_work_state(dw, 0);
> 
> 0 would be TASK_RUNNING. signal_pending_state bails out in this case so 
> that's good.

The same trick is used in mutex_lock_(interruptible|killable) to map
onto a common handler for mutex_lock, so I don't think it's unintentional
;)
-Chris
_______________________________________________
Intel-gfx mailing list
Intel-gfx@xxxxxxxxxxxxxxxxxxxxx
https://lists.freedesktop.org/mailman/listinfo/intel-gfx



