Whenever power wells are disabled, e.g. when entering DC5/DC6, all display
registers are zeroed. DMC firmware restores them on DC5/DC6 exit. However,
the frame counter register is read-only, so DMC cannot restore it. As a
result we started seeing odd errors where drm was waiting for vblank 500
while the hardware counter had been reset and never restored, so the wait
for vblank only completed 500 vblanks later, around 8 seconds late.

Since we have no visibility into when DMC restores the registers, the
quick and dirty way is to update the drm layer counter with the latest
counter we know. At least we no longer fall hundreds of vblanks behind.

FIXME: A proper solution would involve power domain handling to avoid DC
off while a vblank is being waited on. However, due to the spin locks in
drm vblank handling and the mutex sleeps on the power domain handling
side, we cannot do this. One alternative would be to create
pre_enable_vblank and post_disable_vblank hooks outside the spin lock
regions. Unfortunately this is also not trivial because of the many
asynchronous drm_vblank_get and drm_vblank_put calls.

Any other idea or help is very welcome.

Signed-off-by: Rodrigo Vivi <rodrigo.vivi@xxxxxxxxx>
---
 drivers/gpu/drm/i915/i915_irq.c | 14 ++++++++++++++
 1 file changed, 14 insertions(+)

diff --git a/drivers/gpu/drm/i915/i915_irq.c b/drivers/gpu/drm/i915/i915_irq.c
index 25a8937..e67fae4 100644
--- a/drivers/gpu/drm/i915/i915_irq.c
+++ b/drivers/gpu/drm/i915/i915_irq.c
@@ -2744,6 +2744,20 @@ static int gen8_enable_vblank(struct drm_device *dev, unsigned int pipe)
 	unsigned long irqflags;
 
 	spin_lock_irqsave(&dev_priv->irq_lock, irqflags);
+	/*
+	 * DMC firmware can't restore the frame counter register, which is
+	 * read-only, so we need to teach the drm layer our latest
+	 * frame counter value.
+	 * FIXME: We might face a race condition with DC states
+	 * entering after this restore. Unfortunately a power domain to keep
+	 * DC off is not possible at this point due to all the spin locks
+	 * the drm layer takes around vblanks. Another idea was to add
+	 * pre-enable and post-disable vblank hooks, but the drm layer has
+	 * many asynchronous vblank puts, so that is not possible without a
+	 * bigger rework.
+	 */
+	if (HAS_CSR(dev))
+		dev->vblank[pipe].last = g4x_get_vblank_counter(dev, pipe);
 	bdw_enable_pipe_irq(dev_priv, pipe, GEN8_PIPE_VBLANK);
 	spin_unlock_irqrestore(&dev_priv->irq_lock, irqflags);
-- 
2.4.3

_______________________________________________
Intel-gfx mailing list
Intel-gfx@xxxxxxxxxxxxxxxxxxxxx
https://lists.freedesktop.org/mailman/listinfo/intel-gfx