Re: [PATCH 1/2] drm/i915: Bump wait-times for the final CS interrupt before parking

Chris Wilson <chris@xxxxxxxxxxxxxxxxxx> writes:

> Quoting Mika Kuoppala (2017-10-20 14:11:53)
>> Chris Wilson <chris@xxxxxxxxxxxxxxxxxx> writes:
>> 
>> > In the idle worker we drop the prolonged GT wakeref used to cover such
>> > essentials as interrupt delivery. (When a CS interrupt arrives, we also
>> > assert that the GT is awake.) However, it turns out that 10ms is not
>> > long enough to be assured that the last CS interrupt has been delivered,
>> > so bump that to 200ms, and move the entirety of that wait to before we
>> > take the struct_mutex to avoid blocking. As this is now a potentially
>> > long wait, restore the earlier behaviour of bailing out early when a new
>> > request arrives.
>> >
>> > Signed-off-by: Chris Wilson <chris@xxxxxxxxxxxxxxxxxx>
>> > Cc: Joonas Lahtinen <joonas.lahtinen@xxxxxxxxxxxxxxx>
>> > Cc: Mika Kuoppala <mika.kuoppala@xxxxxxxxxxxxxxx>
>> > Cc: Imre Deak <imre.deak@xxxxxxxxx>
>> > ---
>> >  drivers/gpu/drm/i915/i915_gem.c | 31 ++++++++++++++++++++-----------
>> >  1 file changed, 20 insertions(+), 11 deletions(-)
>> >
>> > diff --git a/drivers/gpu/drm/i915/i915_gem.c b/drivers/gpu/drm/i915/i915_gem.c
>> > index 026cb52ece0b..d3a638613857 100644
>> > --- a/drivers/gpu/drm/i915/i915_gem.c
>> > +++ b/drivers/gpu/drm/i915/i915_gem.c
>> > @@ -3281,8 +3281,8 @@ i915_gem_idle_work_handler(struct work_struct *work)
>> >  {
>> >       struct drm_i915_private *dev_priv =
>> >               container_of(work, typeof(*dev_priv), gt.idle_work.work);
>> > -     struct drm_device *dev = &dev_priv->drm;
>> >       bool rearm_hangcheck;
>> > +     ktime_t end;
>> >  
>> >       if (!READ_ONCE(dev_priv->gt.awake))
>> >               return;
>> > @@ -3291,14 +3291,22 @@ i915_gem_idle_work_handler(struct work_struct *work)
>> >        * Wait for last execlists context complete, but bail out in case a
>> >        * new request is submitted.
>> >        */
>> > -     wait_for(intel_engines_are_idle(dev_priv), 10);
>> > -     if (READ_ONCE(dev_priv->gt.active_requests))
>> > -             return;
>> > +     end = ktime_add_ms(ktime_get(), 200);
>> > +     do {
>> > +             if (READ_ONCE(dev_priv->gt.active_requests) ||
>> > +                 work_pending(work))
>> > +                     return;
>> > +
>> > +             if (intel_engines_are_idle(dev_priv))
>> > +                     break;
>> > +
>> > +             usleep_range(100, 500);
>> > +     } while (ktime_before(ktime_get(), end));
>> >
>> 
>> Why can't we just wait_for(intel_engines_are_idle(dev_priv), 200)?
>
> Because of that early return. We don't really want to block the ordered
> wq for 200ms when we already know we won't make progress. (Whilst we
> are running, nothing else that wants to use dev_priv->wq can run.)
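
For context: dev_priv->wq is created with alloc_ordered_workqueue(), so
at most one work item on it executes at a time, and any sleep inside one
item stalls everything queued behind it. A minimal sketch of that
property (slow_work and other_work are hypothetical work items, not from
the driver):

	struct workqueue_struct *wq;

	wq = alloc_ordered_workqueue("example-ordered", 0);
	if (!wq)
		return -ENOMEM;

	/* Suppose this handler sleeps for 200ms, e.g. msleep(200). */
	queue_work(wq, &slow_work);

	/* Ordered wq: cannot start until slow_work's handler returns. */
	queue_work(wq, &other_work);

	destroy_workqueue(wq);

Hence the early bail-out in the patch: once gt.active_requests is
non-zero the poll cannot succeed, so returning immediately frees the
queue for other work.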

Ok, that makes sense, but why don't we have our own workqueue for the
idle worker?
-Mika
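
A dedicated queue along the lines of Mika's question might look like the
sketch below. This is purely illustrative: idle_wq is an invented field,
and the driver reuses dev_priv->wq as of this patch.

	/* Hypothetical: created once at driver init. */
	dev_priv->idle_wq = alloc_workqueue("i915-idle", 0, 0);
	if (!dev_priv->idle_wq)
		return -ENOMEM;

	/*
	 * Queue the idle worker on the dedicated queue so a long poll
	 * there cannot stall items on the ordered dev_priv->wq.
	 */
	mod_delayed_work(dev_priv->idle_wq,
			 &dev_priv->gt.idle_work,
			 msecs_to_jiffies(100));

The trade-off is an extra workqueue per device for a single, infrequent
work item, which may be why the idle worker shares dev_priv->wq in the
first place.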
_______________________________________________
Intel-gfx mailing list
Intel-gfx@xxxxxxxxxxxxxxxxxxxxx
https://lists.freedesktop.org/mailman/listinfo/intel-gfx



