[PATCH 5/7] drm/i915: queue hangcheck on reset

On Tue, Jul 16, 2013 at 10:49:29AM +0200, Daniel Vetter wrote:
> On Wed, Jul 03, 2013 at 05:22:10PM +0300, Mika Kuoppala wrote:
> > From: Mika Kuoppala <mika.kuoppala at linux.intel.com>
> > 
> > Upon resetting the GPU, we begin processing batches once more, so
> > reset the hangcheck timer.
> > 
> > v2: kicking inside reset instead of hangcheck_elapsed and
> >     sane commit message by Chris Wilson
> > 
> > Signed-off-by: Mika Kuoppala <mika.kuoppala at intel.com>
> > ---
> >  drivers/gpu/drm/i915/i915_irq.c |    2 ++
> >  1 file changed, 2 insertions(+)
> > 
> > diff --git a/drivers/gpu/drm/i915/i915_irq.c b/drivers/gpu/drm/i915/i915_irq.c
> > index b0fec7f..1b0e903 100644
> > --- a/drivers/gpu/drm/i915/i915_irq.c
> > +++ b/drivers/gpu/drm/i915/i915_irq.c
> > @@ -1452,6 +1452,8 @@ static void i915_error_work_func(struct work_struct *work)
> >  
> >  			kobject_uevent_env(&dev->primary->kdev.kobj,
> >  					   KOBJ_CHANGE, reset_done_event);
> > +
> > +			i915_queue_hangcheck(dev);
> 
> Hm, what exactly is this for? After reset we don't have any batches
> running right now (since we reset all batches), so I don't understand why
> we need this. And the commit message also doesn't give a reason.

Because our code is in a bit of a snafu after reset. We do still have batches
queued, but the rings are incorrectly reset. Instead they should just be
promoted past the failed batch and processing restarted. If we get
extremely fancy, we can no-op out any requests from the hung !default
context to prevent incorrect state leakage. This also likely explains
why we end up with active bo stuck after becoming wedged.
-Chris

-- 
Chris Wilson, Intel Open Source Technology Centre

