RFC for a render API to support adaptive sync and VRR

From: Koenig, Christian
Sent: Tuesday, April 10, 2018 11:43

On 2018-04-10 at 17:35, Cyr, Aric wrote:

-----Original Message-----
From: Wentland, Harry
Sent: Tuesday, April 10, 2018 11:08
To: Michel Dänzer <michel at daenzer.net>; Koenig, Christian <Christian.Koenig at amd.com>; Manasi Navare <manasi.d.navare at intel.com>
Cc: Haehnle, Nicolai <Nicolai.Haehnle at amd.com>; Daniel Vetter <daniel.vetter at ffwll.ch>; Daenzer, Michel <Michel.Daenzer at amd.com>; dri-devel <dri-devel at lists.freedesktop.org>; amd-gfx mailing list <amd-gfx at lists.freedesktop.org>; Deucher, Alexander <Alexander.Deucher at amd.com>; Cyr, Aric <Aric.Cyr at amd.com>; Koo, Anthony <Anthony.Koo at amd.com>
Subject: Re: RFC for a render API to support adaptive sync and VRR



On 2018-04-10 03:37 AM, Michel Dänzer wrote:

On 2018-04-10 08:45 AM, Christian König wrote:

On 2018-04-09 at 23:45, Manasi Navare wrote:

Thanks for initiating the discussion. Find my comments below:

On Mon, Apr 09, 2018 at 04:00:21PM -0400, Harry Wentland wrote:

On 2018-04-09 03:56 PM, Harry Wentland wrote:



=== A DRM render API to support variable refresh rates ===



In order to benefit from adaptive sync and VRR userland needs a way to let us know whether to vary frame timings or to target a different frame time. These can be provided as atomic properties on a CRTC:

  * bool    variable_refresh_compatible
  * int     target_frame_duration_ns (nanosecond frame duration)



This gives us the following cases:

variable_refresh_compatible = 0, target_frame_duration_ns = 0
  * drive monitor at timing's normal refresh rate

variable_refresh_compatible = 1, target_frame_duration_ns = 0
  * send new frame to monitor as soon as it's available, if within min/max of monitor's reported capabilities

variable_refresh_compatible = 0/1, target_frame_duration_ns > 0
  * send new frame to monitor with the specified target_frame_duration_ns

When a target_frame_duration_ns or variable_refresh_compatible cannot be supported, the atomic check will reject the commit.
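For illustration, a minimal sketch of how userspace might set these properties through the libdrm atomic API, assuming the property names land as proposed here. The find_crtc_prop_id() helper is hypothetical; it would walk the output of drmModeObjectGetProperties() to map a property name to its ID.

#include <stdint.h>
#include <xf86drm.h>
#include <xf86drmMode.h>

/* Hypothetical helper: resolve a CRTC property name to its ID. */
extern uint32_t find_crtc_prop_id(int fd, uint32_t crtc_id, const char *name);

int request_vrr(int fd, uint32_t crtc_id, uint64_t frame_duration_ns)
{
    drmModeAtomicReq *req = drmModeAtomicAlloc();
    int ret;

    if (!req)
        return -1;

    /* Opt in to variable refresh. A duration of 0 means "flip as soon
     * as a frame is available, within the monitor's min/max range". */
    drmModeAtomicAddProperty(req, crtc_id,
            find_crtc_prop_id(fd, crtc_id, "variable_refresh_compatible"), 1);
    drmModeAtomicAddProperty(req, crtc_id,
            find_crtc_prop_id(fd, crtc_id, "target_frame_duration_ns"),
            frame_duration_ns);

    /* Per the RFC, atomic check rejects the commit if the requested
     * values cannot be supported. */
    ret = drmModeAtomicCommit(fd, req, 0, NULL);
    drmModeAtomicFree(req);
    return ret;
}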



What I would like is two sets of properties on a CRTC or preferably on a connector:

KMD properties that UMD can query:
* vrr_capable - This will be an immutable property for exposing hardware's capability of supporting VRR. This will be set by the kernel after reading the EDID mode information and monitor range capabilities.
* vrr_vrefresh_max, vrr_vrefresh_min - To expose the min and max refresh rates supported.

These properties are optional and will be created and attached to the DP/eDP connector when the connector is being initialized.
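As a sketch of what the kernel side could look like under this proposal, using the existing DRM property helpers (the names follow the list above; the range limits and initial values are placeholders, not actual driver code):

#include <drm/drm_connector.h>
#include <drm/drm_property.h>

static int attach_vrr_props(struct drm_connector *connector,
                            int vrefresh_min, int vrefresh_max)
{
    struct drm_device *dev = connector->dev;
    struct drm_property *capable, *vmin, *vmax;

    /* Immutable: userspace can query but not set these. */
    capable = drm_property_create_bool(dev, DRM_MODE_PROP_IMMUTABLE,
                                       "vrr_capable");
    vmin = drm_property_create_range(dev, DRM_MODE_PROP_IMMUTABLE,
                                     "vrr_vrefresh_min", 1, 240);
    vmax = drm_property_create_range(dev, DRM_MODE_PROP_IMMUTABLE,
                                     "vrr_vrefresh_max", 1, 240);
    if (!capable || !vmin || !vmax)
        return -ENOMEM;

    /* Initial values would come from the parsed EDID range descriptor. */
    drm_object_attach_property(&connector->base, capable, 1);
    drm_object_attach_property(&connector->base, vmin, vrefresh_min);
    drm_object_attach_property(&connector->base, vmax, vrefresh_max);
    return 0;
}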



Mhm, aren't those properties actually per mode and not per CRTC/connector?



Properties that you mentioned above that the UMD can set before the kernel can enable VRR functionality:
* bool vrr_enable or vrr_compatible
* target_frame_duration_ns



Yeah, that certainly makes sense. But target_frame_duration_ns is a bad name/semantics.

We should use an absolute timestamp where the frame should be presented, otherwise you could run into a bunch of trouble with IOCTL restarts or missed blanks.



Also, a fixed target frame duration isn't suitable even for video playback, due to drift between the video and audio clocks.



Why?  Even if they drift, you know you want to show your 24Hz video frame for 41.6666ms and adaptive sync can ensure that with reasonable accuracy.

All we're doing is eliminating the need for frame rate converters from the application and offloading that to hardware.



Time-based presentation seems to be the right approach for preventing micro-stutter in games as well; Croteam developers have been researching this.





I'm not sure if the driver can ever give a guarantee of the exact time a flip occurs. What we have control over with our HW is frame duration.



Are Croteam devs trying to predict render times? I'm not sure how that would work. We've had bad experience in the past with games that try to do framepacing, as that's usually not accurate and tends to lead to more problems than benefits.



For gaming, it doesn't make sense, nor is it feasible, to know exactly how long a render will take with microsecond precision; very coarse guesses are the best you can do. The point of adaptive sync is that it works *transparently* for the majority of cases, within the capability of the HW and driver. We don't want to have every game rewrite their engine to support this, but we do want the majority to "just work".



The only exception is the video case where an application may want to request a fixed frame duration aligned to the video content.  This requires an explicit interface for the video app, and our proposal is to keep it simple:  app knows how long a frame should be presented for, and we try to honour that.

Well I strongly disagree on that.

See VDPAU for example: https://http.download.nvidia.com/XFree86/vdpau/doxygen/html/group___vdp_presentation_queue.html#ga5bd61ca8ef5d1bc54ca6921aa57f835a

[in] earliest_presentation_time: The timestamp associated with the surface. The presentation queue will not display the surface until the presentation queue's current time is at least this value.
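To make the comparison concrete, this is roughly the flow that interface implies, with the VDPAU entry points assumed to have been fetched through VdpGetProcAddress() beforehand (the pointer names below follow the usual convention; they are not globals exported by the library):

#include <vdpau/vdpau.h>

extern VdpPresentationQueueGetTime *vdp_presentation_queue_get_time;
extern VdpPresentationQueueDisplay *vdp_presentation_queue_display;

VdpStatus queue_frame(VdpPresentationQueue queue, VdpOutputSurface surface,
                      VdpTime frame_duration_ns)
{
    VdpTime now;
    VdpStatus st = vdp_presentation_queue_get_time(queue, &now);

    if (st != VDP_STATUS_OK)
        return st;

    /* Absolute target: do not show this surface before now + one frame.
     * The queue, not the application, decides when the flip happens. */
    return vdp_presentation_queue_display(queue, surface,
                                          0, 0, /* clip: full surface */
                                          now + frame_duration_ns);
}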


Video players especially want an interface where they can specify exactly when a frame should show up on the display, and then get feedback on when it actually was displayed.

That presentation time doesn't need to come to the kernel as such, and actually is fine as-is, completely decoupled from adaptive sync. As long as the video player provides the new target_frame_duration_ns on the flip, the driver/HW will target the correct refresh rate to match the source content. This simply means that more often than not the video presents will align very closely to the monitor's refresh rate, resulting in a smooth video experience. For example, if you have 24Hz content and an adaptive sync monitor with a range of 40-60Hz, once the target_frame_duration_ns is provided, the driver can configure the monitor to a fixed refresh rate of 48Hz, causing all video presents to be frame-doubled in hardware without further application intervention.
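The refresh selection described here is simple arithmetic; a hypothetical helper (not actual driver code) might pick the highest integer multiple of the content rate that falls inside the monitor's VRR range:

#include <stdio.h>

/* Returns the chosen fixed refresh rate in Hz, or 0 if no integer
 * multiple of the content rate fits the VRR range. */
static int pick_fixed_refresh(int content_hz, int vrr_min_hz, int vrr_max_hz)
{
    int mult;

    for (mult = vrr_max_hz / content_hz; mult >= 1; mult--) {
        int hz = content_hz * mult;

        if (hz >= vrr_min_hz && hz <= vrr_max_hz)
            return hz;
    }
    return 0;
}

int main(void)
{
    /* 24Hz content on a 40-60Hz panel: 48Hz, every frame shown twice. */
    printf("%d Hz\n", pick_fixed_refresh(24, 40, 60));
    return 0;
}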

For video games we have a similar situation: a frame is rendered for a certain world time, and in the ideal case we would actually display the frame at that world time.

That seems like it would be a poorly written game that flips like that, unless it is explicitly trying to throttle the framerate for some reason. When a game presents a completed frame, it would like that to happen as soon as possible. This is why non-VSYNC modes of flipping exist, and many games leverage them. Adaptive sync gives you the lower latency of immediate flips without the tearing imposed by non-VSYNC flipping.

I mean we have the guys from Valve on this mailing list so I think we should just get the feedback from them and see what they prefer.

We have thousands of Steam games on other OSes that work great already, but we'd certainly be interested in any additional feedback. My guess is they prefer to "do nothing" and let the driver/HW manage it; otherwise you exclude all existing games from supporting adaptive sync without a rewrite or update.

Regards,
Christian.

-Aric
