Re: [PATCH] drm/i915: Accurately track when we mark the hardware as idle/busy

On Tue, Feb 18, 2014 at 07:23:13PM -0300, Paulo Zanoni wrote:
> 2014-02-18 18:34 GMT-03:00 Paulo Zanoni <przanoni@xxxxxxxxx>:
> > 2014-02-18 16:25 GMT-03:00 Mika Kuoppala <mika.kuoppala@xxxxxxxxxxxxxxx>:
> >> Chris Wilson <chris@xxxxxxxxxxxxxxxxxx> writes:
> >>
> >>> We currently call intel_mark_idle() too often, as we do so as a
> >>> side-effect of processing the request queue. However, the calls to
> >>> intel_mark_idle() are expected to be paired with a call to
> >>> intel_mark_busy() (or else we try to idle the hardware by accessing
> >>> registers that are already disabled). Make the idle/busy tracking
> >>> explicit to prevent the multiple calls.
> >>>
> >>> Reported-by: Paulo Zanoni <paulo.r.zanoni@xxxxxxxxx>
> >>> Signed-off-by: Chris Wilson <chris@xxxxxxxxxxxxxxxxxx>
> >>> Cc: Paulo Zanoni <paulo.r.zanoni@xxxxxxxxx>
> >
> > Thanks!
> >
> > I tested it and this patch + another local patch I have fix the
> > problem that can be reproduced by the "gem-idle" subtest of pm_pc8.c
> > (I still did not commit the subtest, but will do it soon).
> >
> > Also, I guess this patch deprecates dev_priv->pc8.gpu_idle. I had
> > plans to move it to dev_priv->pm.gpu_idle, but now I'll try to kill
> > it.
> >
> >>> ---
> >>>  drivers/gpu/drm/i915/i915_drv.h      |    8 ++++++++
> >>>  drivers/gpu/drm/i915/i915_gem.c      |    4 +---
> >>>  drivers/gpu/drm/i915/intel_display.c |   11 +++++++++++
> >>>  drivers/gpu/drm/i915/intel_drv.h     |    2 +-
> >>>  4 files changed, 21 insertions(+), 4 deletions(-)
> >>>
> >>> diff --git a/drivers/gpu/drm/i915/i915_drv.h b/drivers/gpu/drm/i915/i915_drv.h
> >>> index 00222cc..8441c8a 100644
> >>> --- a/drivers/gpu/drm/i915/i915_drv.h
> >>> +++ b/drivers/gpu/drm/i915/i915_drv.h
> >>> @@ -1134,6 +1134,14 @@ struct i915_gem_mm {
> >>>        */
> >>>       bool interruptible;
> >>>
> >>> +     /**
> >>> +      * Is the GPU currently considered idle, or busy executing userspace
> >>> +      * requests?  Whilst idle, we attempt to power down the hardware and
> >>> +      * display clocks. In order to reduce the effect on performance, there
> >>> +      * is a slight delay before we do so.
> >>> +      */
> >>> +     bool busy;
> 
> Hi
> 
> Also, don't we want to init this to true, since the first thing called
> seems to be intel_mark_idle?

As far as the state being tracked, it is logically .busy = false on
first load, until we process the first batch.
-Chris

-- 
Chris Wilson, Intel Open Source Technology Centre
_______________________________________________
Intel-gfx mailing list
Intel-gfx@xxxxxxxxxxxxxxxxxxxxx
http://lists.freedesktop.org/mailman/listinfo/intel-gfx
