[RFC] Reduce idle vblank wakeups

[Date Prev][Date Next][Thread Prev][Thread Next][Date Index][Thread Index]

 



On Wed, Nov 16, 2011 at 07:27:51PM +0100, Mario Kleiner wrote:

> It's not broken hardware, but fast ping-ponging it on and off can
> make the vblank counter and vblank timestamps unreliable for apps
> that need high timing precision, especially for the ones that use
> the OML_sync_control extensions for precise swap scheduling. My
> target application is vision science
>  neuroscience, where (sub-)milliseconds often matter for visual
> stimulation.

I'll admit that I'm struggling to understand the issue here. If the 
vblank counter is incremented at the time of vblank (which isn't the 
case for radeon, it seems, but as far as I can tell is the case for 
Intel) then how does ping-ponging the IRQ matter? 
vblank_disable_and_save() appears to handle this case.

> I think making the vblank off delay driver specific via these
> patches is a good idea. Lowering the timeout to something like a few
> refresh cycles, maybe somewhere between 50 msecs and 100 msecs would
> be also fine by me. I still would like to keep some drm config
> option to disable or override the vblank off delay by users.

Does the timeout serve any purpose other than letting software 
effectively prevent vblanks from being disabled?

> The intel and radeon kms drivers implement everything that's needed
> to make it mostly work. Except for a small race between the cpu and
> gpu in the vblank_disable_and_save() function <http://lxr.free-
> electrons.com/source/drivers/gpu/drm/drm_irq.c#L101> and
> drm_update_vblank_count(). It can cause an off-by-one error when
> reinitializing the drm vblank counter from the gpu's hardware
> counter if the enable/disable function is called at the wrong moment
> while the gpu's scanout is inside the vblank interval, see comments
> in the code. I have some sketchy idea for a patch that could detect
> when the race happens and retry hw counter queries to fix this.
> Without that patch, there's some chance between 0% and 4% of being
> off-by-one.

For Radeon, I'd have thought you could handle this by scheduling an irq 
for the beginning of scanout (avivo has a register for that) and 
delaying the vblank disable until you hit it?

> On current nouveau kms, disabling vblank irqs guarantees you wrong
> vblank counts and wrong vblank timestamps after turning them on
> again, because the kms driver doesn't implement the hook for
> hardware vblank counter query correctly. The drm vblank counter
> misses all counts during the vblank irq off period. Other timing
> related hooks are missing as well. I have a couple of patches queued
> up and some more to come for the ddx and kms driver to mostly fix
> this. NVidia gpu's only have hardware vblank counters for NV-50 and
> later, fixing this for earlier gpu's would require some emulation of
> a hw vblank counter inside the kms driver.

I've no problem with all of this work being case by case.

> Apps that rely on the vblank counter being totally reliable over
> long periods of time currently would be in a bad situation with a
> lowered vblank off delay, but that's probably generally not a good
> assumption. Toolkits like mine, which are more paranoid, currently
> can work fine as long as the off delay is at least a few video
> refresh cycles. I do the following for scheduling a reliably timed
> swap:
> 
> 1. Query current vblank counter current_msc and vblank timestamp
> current_ust.
> 2. Calculate a target vblank count target_msc, based on current_msc,
> current_ust and some target time from usercode.
> 3. Schedule bufferswap for target_msc.
> 
> As long as the vblank off delay is long enough so that vblanks don't
> get turned off between 1. and 3, everything is fine, otherwise bad
> things will happen.
> Keeping a way to override the default off delay would be good to
> allow poor scientists to work around potentially broken drivers or
> gpu's in the future. @Matthew: I'm appealing here to your ex-
> Drosophila biologist heritage ;-)

If vblanks are disabled and then re-enabled between 1 and 3, what's the 
negative outcome?

-- 
Matthew Garrett | mjg59 at srcf.ucam.org


[Index of Archives]     [Linux USB Devel]     [Linux Audio Users]     [Yosemite News]     [Linux Kernel]     [Linux SCSI]
  Powered by Linux