RE: RFC for a render API to support adaptive sync and VRR


 



From: Koenig, Christian
Sent: Tuesday, April 10, 2018 11:43

Am 10.04.2018 um 17:35 schrieb Cyr, Aric:

-----Original Message-----
From: Wentland, Harry
Sent: Tuesday, April 10, 2018 11:08
To: Michel Dänzer <michel@xxxxxxxxxxx>; Koenig, Christian <Christian.Koenig@xxxxxxx>; Manasi Navare
<manasi.d.navare@xxxxxxxxx>
Cc: Haehnle, Nicolai <Nicolai.Haehnle@xxxxxxx>; Daniel Vetter <daniel.vetter@xxxxxxxx>; Daenzer, Michel
<Michel.Daenzer@xxxxxxx>; dri-devel <dri-devel@xxxxxxxxxxxxxxxxxxxxx>; amd-gfx mailing list <amd-gfx@xxxxxxxxxxxxxxxxxxxxx>;
Deucher, Alexander <Alexander.Deucher@xxxxxxx>; Cyr, Aric <Aric.Cyr@xxxxxxx>; Koo, Anthony <Anthony.Koo@xxxxxxx>
Subject: Re: RFC for a render API to support adaptive sync and VRR
 
On 2018-04-10 03:37 AM, Michel Dänzer wrote:
On 2018-04-10 08:45 AM, Christian König wrote:
Am 09.04.2018 um 23:45 schrieb Manasi Navare:
Thanks for initiating the discussion. Find my comments below:
On Mon, Apr 09, 2018 at 04:00:21PM -0400, Harry Wentland wrote:
On 2018-04-09 03:56 PM, Harry Wentland wrote:
 
=== A DRM render API to support variable refresh rates ===
 
In order to benefit from adaptive sync and VRR userland needs a way
to let us know whether to vary frame timings or to target a
different frame time. These can be provided as atomic properties on
a CRTC:
  * bool    variable_refresh_compatible
  * int    target_frame_duration_ns (nanosecond frame duration)
 
This gives us the following cases:
 
variable_refresh_compatible = 0, target_frame_duration_ns = 0
  * drive monitor at timing's normal refresh rate
 
variable_refresh_compatible = 1, target_frame_duration_ns = 0
  * send new frame to monitor as soon as it's available, if within
min/max of monitor's reported capabilities
 
variable_refresh_compatible = 0/1, target_frame_duration_ns > 0
  * send new frame to monitor with the specified
target_frame_duration_ns
 
When a target_frame_duration_ns or variable_refresh_compatible
cannot be supported the atomic check will reject the commit.
 
What I would like is two sets of properties on a CRTC or preferably on
a connector:
 
KMD properties that UMD can query:
* vrr_capable - An immutable property exposing the hardware's
capability of supporting VRR. This will be set by the kernel after
reading the EDID mode information and monitor range capabilities.
* vrr_vrefresh_max, vrr_vrefresh_min - Expose the minimum and maximum
refresh rates supported.
These properties are optional and will be created and attached to the
DP/eDP connector when the connector is initialized.
 
Mhm, aren't those properties actually per mode and not per CRTC/connector?
 
Properties that you mentioned above that the UMD can set before kernel
can enable VRR functionality
*bool vrr_enable or vrr_compatible
target_frame_duration_ns
 
Yeah, that certainly makes sense. But target_frame_duration_ns has bad
naming/semantics.
 
We should use an absolute timestamp where the frame should be presented,
otherwise you could run into a bunch of trouble with IOCTL restarts or
missed blanks.
 
Also, a fixed target frame duration isn't suitable even for video
playback, due to drift between the video and audio clocks.
 
Why?  Even if they drift, you know you want to show your 24Hz video frame for 41.6666ms and adaptive sync can ensure that with reasonable accuracy.  
All we're doing is eliminating the need for frame rate converters from the application and offloading that to hardware.
 
Time-based presentation seems to be the right approach for preventing
micro-stutter in games as well, Croteam developers have been researching
this.
 
 
I'm not sure the driver can ever guarantee the exact time a flip occurs. What we have control over with our HW is frame
duration.
 
Are Croteam devs trying to predict render times? I'm not sure how that would work. We've had bad experiences in the past with
games that try to do frame pacing, as that's usually not accurate and tends to lead to more problems than benefits.
 
For gaming, it doesn't make sense, nor is it feasible, to know exactly how long a render will take; microsecond precision is out of reach, and very coarse guesses are the best you can do.  The point of adaptive sync is that it works *transparently* for the majority of cases, within the capability of the HW and driver.  We don't want every game to rewrite their engine to support this, but we do want the majority to "just work".
 
The only exception is the video case where an application may want to request a fixed frame duration aligned to the video content.  This requires an explicit interface for the video app, and our proposal is to keep it simple:  app knows how long a frame should be presented for, and we try to honour that.


Well I strongly disagree on that.

See VDPAU for example: https://http.download.nvidia.com/XFree86/vdpau/doxygen/html/group___vdp_presentation_queue.html#ga5bd61ca8ef5d1bc54ca6921aa57f835a

earliest_presentation_time [in]

    The timestamp associated with the surface. The presentation queue will not display the surface until the presentation queue's current time is at least this value.


Especially video players want an interface where they can specify when exactly a frame should show up on the display and then get the feedback when it actually was displayed.

 

That presentation time doesn't need to come to the kernel as such and actually is fine as-is, completely decoupled from adaptive sync.  As long as the video player provides the new target_frame_duration_ns on the flip, the driver/HW will target the correct refresh rate to match the source content.  This simply means that more often than not the video presents will align very closely to the monitor's refresh rate, resulting in a smooth video experience.  For example, if you have 24Hz content and an adaptive sync monitor with a range of 40-60Hz, once the target_frame_duration_ns is provided, the driver can configure the monitor to a fixed refresh rate of 48Hz, causing all video presents to be frame-doubled in hardware without further application intervention.


For video games we have a similar situation: a frame is rendered for a certain world time, and in the ideal case we would actually display the frame at that world time.

 

A game that flips like that would seem poorly written, unless it is explicitly trying to throttle the framerate for some reason.  When a game presents a completed frame, it wants that to happen as soon as possible.  This is why non-VSYNC modes of flipping exist and many games leverage them.  Adaptive sync gives you the lower latency of immediate flips without the tearing imposed by non-VSYNC flipping.


I mean we have the guys from Valve on this mailing list so I think we should just get the feedback from them and see what they prefer.

We have thousands of Steam games on other OSes that work great already, but we'd certainly be interested in any additional feedback.  My guess is they'd prefer to "do nothing" and let the driver/HW manage it; otherwise you exclude all existing games from supporting adaptive sync without a rewrite or update.


Regards,
Christian.


 
 
-Aric

 

_______________________________________________
dri-devel mailing list
dri-devel@xxxxxxxxxxxxxxxxxxxxx
https://lists.freedesktop.org/mailman/listinfo/dri-devel
