And kill the comment about it. Queueing work is a barrier-type event, no
amount of locking will help in ordering things (as long as we queue the
work after having updated all relevant data structures). Also, queue_work
itself acts as a sufficient memory barrier.

Again, on the surface this is just a tiny micro-optimization to reduce the
hold-time of dev_priv->irq_lock. But the better reason is that it reduces
superfluous locking and so makes it clearer what we actually need for
correctness.

Signed-off-by: Daniel Vetter <daniel.vetter at ffwll.ch>
---
 drivers/gpu/drm/i915/i915_irq.c | 4 ++--
 1 file changed, 2 insertions(+), 2 deletions(-)

diff --git a/drivers/gpu/drm/i915/i915_irq.c b/drivers/gpu/drm/i915/i915_irq.c
index 29d4c9f..752b98d 100644
--- a/drivers/gpu/drm/i915/i915_irq.c
+++ b/drivers/gpu/drm/i915/i915_irq.c
@@ -938,9 +938,9 @@ static void hsw_pm_irq_handler(struct drm_i915_private *dev_priv,
 		I915_WRITE(GEN6_PMIMR, dev_priv->rps.pm_iir);
 		/* never want to mask useful interrupts. (also posting read) */
 		WARN_ON(I915_READ_NOTRACE(GEN6_PMIMR) & ~GEN6_PM_RPS_EVENTS);
-		/* TODO: if queue_work is slow, move it out of the spinlock */
-		queue_work(dev_priv->wq, &dev_priv->rps.work);
 		spin_unlock(&dev_priv->rps.lock);
+
+		queue_work(dev_priv->wq, &dev_priv->rps.work);
 	}
 
 	if (pm_iir & PM_VEBOX_USER_INTERRUPT)
--
1.8.1.4
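
[Editor's sketch, not part of the patch] A minimal illustration of the ordering
pattern the commit message argues for, using hypothetical names (rps_state,
rps_irq_handler) rather than the real i915 structures: publish the data under
the lock, drop the lock, and only then queue the work. queue_work() itself
orders things such that the worker observes the updates made before queueing.

#include <linux/spinlock.h>
#include <linux/types.h>
#include <linux/workqueue.h>

/* Hypothetical state, loosely modelled on dev_priv->rps. */
struct rps_state {
	spinlock_t lock;
	u32 pm_iir;
	struct work_struct work;
};

static void rps_irq_handler(struct rps_state *rps, u32 pm_iir)
{
	/* Update all relevant data structures under the lock ... */
	spin_lock(&rps->lock);
	rps->pm_iir |= pm_iir;
	spin_unlock(&rps->lock);

	/*
	 * ... then queue the work outside the lock; queue_work() is a
	 * sufficient barrier for the worker to see the update above.
	 */
	queue_work(system_wq, &rps->work);
}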