On Fri, Aug 09, 2013 at 05:04:36PM -0300, Paulo Zanoni wrote:
> From: Paulo Zanoni <paulo.r.zanoni@xxxxxxxxx>
>
> It seems we've been doing this ever since we started processing the
> RPS events on a work queue, on commit "drm/i915: move gen6 rps
> handling to workqueue", 4912d04193733a825216b926ffd290fada88ab07.
>
> The problem is: when we add work to the queue, instead of just masking
> the bits we queued and leaving all the others in their current state,
> we mask the bits we queued and unmask all the others. This basically
> means we'll be unmasking a bunch of interrupts we're not going to
> process. And if you look at gen6_pm_rps_work, we unmask back only
> GEN6_PM_RPS_EVENTS, which means the bits we unmasked when adding work
> to the queue will remain unmasked after we process the queue.
>
> Notice that even though we unmask those unrelated interrupts, we never
> enable them on IER, so they don't fire our interrupt handler, they
> just stay there on IIR waiting to be cleared when something else
> triggers the interrupt handler.
>
> So this patch does what seems to make more sense: mask only the bits
> we add to the queue, without unmasking anything else, and unmask them
> again only after we process the queue.
>
> As a side effect we also have to remove that WARN, because it is not
> only making sure we don't mask useful interrupts, it is also making
> sure we do unmask useless interrupts! That piece of code should not be
> responsible for knowing which bits should be unmasked, so just don't
> assert anything, and trust that snb_disable_pm_irq should be doing the
> right thing.
>
> With i915.enable_pc8=1 I was getting occasional "GEN6_PMIIR is not 0"
> error messages due to the fact that we unmask those unrelated
> interrupts but don't enable them.
>
> Note: if bugs start bisecting to this patch, then it probably means
> someone was relying on the fact that we unmask everything by accident,
> and we should fix gen5_gt_irq_postinstall or whoever needs the
> accidentally unmasked interrupts. Or maybe I was just wrong and we
> need to revert this patch :)
>
> Signed-off-by: Paulo Zanoni <paulo.r.zanoni@xxxxxxxxx>

I've added another note here explaining that with the addition of VEBOX
this little bug started to matter: after the first RPS interrupt we'll
never mask the VEBOX user interrupt again, and so we blow through CPU
time needlessly when running video workloads using VEBOX.
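To make the masking semantics concrete, here's a small standalone model
of the PMIMR bit math (plain userspace C, not driver code; the register
bit values below are placeholders, and the read-modify-write behaviour
of snb_update_pm_irq is an assumption inferred from how the helpers in
the diff below call it, with a set bit in PMIMR masking the interrupt).
The old snb_set_pm_irq(dev_priv, val) rewrites the whole register to
exactly val, so every bit outside the queued RPS events gets unmasked
as a side effect; snb_disable_pm_irq(dev_priv, mask) only adds the
given bits to the mask and leaves everything else alone:

/* Standalone sketch of the GEN6_PMIMR updates discussed above.
 * Not the real driver code; bit values are made up for illustration.
 * A set bit in "imr" masks that interrupt.
 */
#include <stdint.h>
#include <stdio.h>

#define GEN6_PM_RPS_EVENTS	0x00000070u	/* placeholder bits */
#define PM_VEBOX_USER_IRQ	0x00000400u	/* placeholder bit  */

/* Assumed behaviour of snb_update_pm_irq(): for every bit in
 * interrupt_mask, mask it unless it is also set in enabled_irq_mask. */
static uint32_t update_pm_irq(uint32_t imr, uint32_t interrupt_mask,
			      uint32_t enabled_irq_mask)
{
	imr &= ~interrupt_mask;
	imr |= ~enabled_irq_mask & interrupt_mask;
	return imr;
}

/* Old helper: IMR becomes exactly "val", so every bit not in "val"
 * is unmasked as a side effect. */
static uint32_t set_pm_irq(uint32_t imr, uint32_t val)
{
	return update_pm_irq(imr, 0xffffffffu, ~val);
}

/* Call used by the patch: only add the given bits to the mask. */
static uint32_t disable_pm_irq(uint32_t imr, uint32_t mask)
{
	return update_pm_irq(imr, mask, 0);
}

int main(void)
{
	/* Start with the VEBOX user interrupt masked and RPS events in IIR. */
	uint32_t imr = PM_VEBOX_USER_IRQ;
	uint32_t pm_iir = GEN6_PM_RPS_EVENTS;

	printf("old: IMR %#010x -> %#010x (VEBOX bit unmasked by accident)\n",
	       (unsigned)imr,
	       (unsigned)set_pm_irq(imr, pm_iir & GEN6_PM_RPS_EVENTS));
	printf("new: IMR %#010x -> %#010x (VEBOX bit stays masked)\n",
	       (unsigned)imr,
	       (unsigned)disable_pm_irq(imr, pm_iir & GEN6_PM_RPS_EVENTS));
	return 0;
}

Running the model with the VEBOX bit initially masked shows the old
helper clearing it (unmasking it for good), while the new call leaves
it set and only masks the RPS bits that were queued.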
-Daniel

> ---
>  drivers/gpu/drm/i915/i915_irq.c | 11 ++---------
>  1 file changed, 2 insertions(+), 9 deletions(-)
>
> diff --git a/drivers/gpu/drm/i915/i915_irq.c b/drivers/gpu/drm/i915/i915_irq.c
> index 5b51c43..d9ebfb6 100644
> --- a/drivers/gpu/drm/i915/i915_irq.c
> +++ b/drivers/gpu/drm/i915/i915_irq.c
> @@ -167,11 +167,6 @@ void snb_disable_pm_irq(struct drm_i915_private *dev_priv, uint32_t mask)
>  	snb_update_pm_irq(dev_priv, mask, 0);
>  }
>
> -static void snb_set_pm_irq(struct drm_i915_private *dev_priv, uint32_t val)
> -{
> -	snb_update_pm_irq(dev_priv, 0xffffffff, ~val);
> -}
> -
>  static bool ivb_can_enable_err_int(struct drm_device *dev)
>  {
>  	struct drm_i915_private *dev_priv = dev->dev_private;
> @@ -960,7 +955,7 @@ static void gen6_rps_irq_handler(struct drm_i915_private *dev_priv,
>
>  	spin_lock(&dev_priv->irq_lock);
>  	dev_priv->rps.pm_iir |= pm_iir & GEN6_PM_RPS_EVENTS;
> -	snb_set_pm_irq(dev_priv, dev_priv->rps.pm_iir);
> +	snb_disable_pm_irq(dev_priv, pm_iir & GEN6_PM_RPS_EVENTS);
>  	spin_unlock(&dev_priv->irq_lock);
>
>  	queue_work(dev_priv->wq, &dev_priv->rps.work);
> @@ -1043,9 +1038,7 @@ static void hsw_pm_irq_handler(struct drm_i915_private *dev_priv,
>  	if (pm_iir & GEN6_PM_RPS_EVENTS) {
>  		spin_lock(&dev_priv->irq_lock);
>  		dev_priv->rps.pm_iir |= pm_iir & GEN6_PM_RPS_EVENTS;
> -		snb_set_pm_irq(dev_priv, dev_priv->rps.pm_iir);
> -		/* never want to mask useful interrupts. */
> -		WARN_ON(I915_READ_NOTRACE(GEN6_PMIMR) & ~GEN6_PM_RPS_EVENTS);
> +		snb_disable_pm_irq(dev_priv, pm_iir & GEN6_PM_RPS_EVENTS);
>  		spin_unlock(&dev_priv->irq_lock);
>
>  		queue_work(dev_priv->wq, &dev_priv->rps.work);
> --
> 1.8.1.2

--
Daniel Vetter
Software Engineer, Intel Corporation
+41 (0) 79 365 57 48 - http://blog.ffwll.ch