Re: [PATCH] drm/i915: Flush the RPS bottom-half when the GPU idles

On Thu, Dec 10, 2015 at 12:02:55AM +0200, Imre Deak wrote:
> On Wed, 2015-12-09 at 20:52 +0000, Chris Wilson wrote:
> > On Wed, Dec 09, 2015 at 07:47:29PM +0200, Imre Deak wrote:
> > > >  void gen6_rps_idle(struct drm_i915_private *dev_priv)
> > > >  {
> > > > -	struct drm_device *dev = dev_priv->dev;
> > > > +	/* Flush our bottom-half so that it does not race with us
> > > > +	 * setting the idle frequency and so that it is bounded by
> > > > +	 * our rpm wakeref.
> > > > +	 */
> > > > +	flush_work(&dev_priv->rps.work);
> > > 
> > > A (spurious) RPS interrupt could still reschedule the work, so could
> > > we also explicitly disable the interrupts? Meaning to use
> > > gen6_{disable,enable}_rps_interrupts() in gen6_rps_{idle,busy} and
> > > making sure vlv_set_rps_idle(), gen6_set_rps() would not re-enable
> > > the interrupts.
> > 
> > Yes, we can do that.
> >  
> > > That would also make it possible to
> > > remove gen6_{disable,enable}_rps_interrupts() from the 
> > > suspend/resume path.
> > 
> > A while back we discussed this, and I've been running with
> > 
> > http://cgit.freedesktop.org/~ickle/linux-2.6/commit/?h=nightly&id=11ff1e6deceb33a5db7be31830abb46c1450755e
> > 
> > which disables the RPS interrupt at idle time (and kills the then
> > superfluous suspend path). It works, apart from a few spurious
> > interrupt warnings.
> 
> If this is about the WARNs in gen6_enable_rps_interrupts() then
> gen6_disable_rps_interrupts() may leave PM IIR bits set,
> but gen6_reset_rps_interrupts() would clear those. The patch you linked
> calls gen6_reset_rps_interrupts(), so I have no idea how they could
> still happen.

Maybe self-inflicted by a later patch to remove reset-rps-interrupts. I
was under the impression that we didn't actually need to do the reset.
 
> > Though I missed the flush_work(&rps.work) caught in this patch, which
> > may just account for the errors.
> 
> There is cancel_work_sync(&rps.work) in gen6_disable_rps_interrupts(),
> so we wouldn't need the flush_work() imo.

Right.
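
For reference, the idle path I'm carrying looks roughly like this (a
sketch only, untested as written here; it assumes vlv_set_rps_idle() and
gen6_set_rps() no longer re-enable the interrupts, and it relies on the
cancel_work_sync() you mention inside gen6_disable_rps_interrupts()):

void gen6_rps_idle(struct drm_i915_private *dev_priv)
{
	struct drm_device *dev = dev_priv->dev;

	/* No new RPS interrupts past this point; the cancel_work_sync()
	 * inside gen6_disable_rps_interrupts() also covers an
	 * already-queued bottom-half, so no separate flush_work().
	 */
	gen6_disable_rps_interrupts(dev);

	mutex_lock(&dev_priv->rps.hw_lock);
	if (dev_priv->rps.enabled) {
		if (IS_VALLEYVIEW(dev))
			vlv_set_rps_idle(dev_priv);
		else
			gen6_set_rps(dev, dev_priv->rps.idle_freq);
		dev_priv->rps.last_adj = 0;
	}
	mutex_unlock(&dev_priv->rps.hw_lock);
}

with gen6_rps_busy() doing the matching gen6_enable_rps_interrupts()
once it has kicked the frequency back up.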
 
> Btw, I haven't measured, but if the overhead added by all this is
> significant we could instead use rpm_get_noidle() in the rps work too.

I was anticipating the irq locks and synchronize_irq being the worst
offenders. However, rps busy/idle don't have much impact on GPU-intensive
workloads, so they tend to stay out of the usual measurements and are
hopefully insignificant.
-Chris

-- 
Chris Wilson, Intel Open Source Technology Centre
_______________________________________________
Intel-gfx mailing list
Intel-gfx@xxxxxxxxxxxxxxxxxxxxx
http://lists.freedesktop.org/mailman/listinfo/intel-gfx



