On 10/05/2018 04:10 AM, Pekka Paalanen wrote:
Hi,
I have somewhat of my own view on what would be involved with VRR,
and I'd like to hear what you think of it. Comments inline.
On Tue, 25 Sep 2018 09:51:37 -0400
"Kazlauskas, Nicholas" <nicholas.kazlauskas@xxxxxxx> wrote:
On 09/24/2018 04:26 PM, Ville Syrjälä wrote:
On Mon, Sep 24, 2018 at 03:06:02PM -0400, Kazlauskas, Nicholas wrote:
On 09/24/2018 02:38 PM, Ville Syrjälä wrote:
On Mon, Sep 24, 2018 at 02:15:36PM -0400, Nicholas Kazlauskas wrote:
Variable refresh rate algorithms have typically been enabled only
when the display is covered by a single source of content.
This patch introduces a new default CRTC property that helps
hint to the driver when the CRTC composition is suitable for variable
refresh rate algorithms. Userspace can set this property dynamically
as the composition changes.
Whether the variable refresh rate algorithms are active will still
depend on the CRTC being suitable and the connector being capable
and enabled by the user for variable refresh rate support.
It is worth noting that while the property is atomic it isn't filtered
from legacy userspace queries. This allows for Xorg userspace drivers
to implement support in non-atomic setups.
Signed-off-by: Nicholas Kazlauskas <nicholas.kazlauskas@xxxxxxx>
...
Actually I don't understand why this per-crtc thing is being added at
all. You already have the prop on the connector. Why do we want to
make life more difficult by requiring the thing to be set on both the
crtc and connector?
It doesn't make much sense without both.
The user can globally enable or disable the variable_refresh_enabled on
the connector. This is the user's preference.
What they don't control is the variable_refresh - the content hint that
userspace specifies when the CRTC contents are suitable for enabling
variable refresh features (like adaptive sync). This is userspace's
preference.
By userspace I guess you mean the compositor/display server. I don't
really see why the kernel has to be involved like this in a userspace
policy matter. If the compositor doesn't think vrr is a good idea then
it could simply choose to disable the prop on the connector even when
requested by its clients.
The properties are both a kernel and userspace API so I think it's
important to think about the usecase and API usage for both.
In a DRM driver the variable refresh capability will be determined by
connector type and the display that's connected. The driver needs a way
to track this if it's going to be sending any sort of packet over the
connector. The capability property provides a method for the driver to
do this while also exposing this information to userspace for querying.
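As a rough sketch of how a compositor or DDX could consume that
capability property through libdrm (the "variable_refresh_capable"
name is the one from this series and may still change):

#include <stdbool.h>
#include <stdint.h>
#include <string.h>
#include <xf86drm.h>
#include <xf86drmMode.h>

/* Sketch only: report whether a connector exposes a non-zero
 * "variable_refresh_capable" property. */
static bool connector_vrr_capable(int fd, uint32_t connector_id)
{
    bool capable = false;
    drmModeObjectProperties *props =
        drmModeObjectGetProperties(fd, connector_id,
                                   DRM_MODE_OBJECT_CONNECTOR);

    if (!props)
        return false;

    for (uint32_t i = 0; i < props->count_props; i++) {
        drmModePropertyRes *prop = drmModeGetProperty(fd, props->props[i]);

        if (!prop)
            continue;
        if (!strcmp(prop->name, "variable_refresh_capable"))
            capable = props->prop_values[i] != 0;
        drmModeFreeProperty(prop);
    }

    drmModeFreeObjectProperties(props);
    return capable;
}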
The variable_refresh_enabled property exists because not all users will
want this feature enabled. I think it makes sense to place this
alongside the capability property on the connector because of their
relation and because of the ease of access for the end user in existing
userspace stacks - xrandr makes this easy.
I'm not sure I understood your intention here. Do you mean that you
expect games (e.g. via Mesa) to toggle the connector property via RandR
to indicate whether they wish for VRR?
The RandR interface is for display configuration utilities, like the UIs
that desktop environments use to configure monitors. It should not be
used by applications like games to modify anything. I believe there are
lots of apps out there that enrage users by abusing RandR, but one
should not base an interface design on such abuse.
If games are not using RandR to communicate the wish for VRR, what do
they use? Does the X11 Present extension have something for it?
What is the application-facing API to request VRR, is it in GLX, EGL,
Vulkan, something else?
Btw. I would not expect xrandr to qualify as proper userspace to
validate new kernel UABI, like new properties. Did you have something
else for the userspace settable connector property?
The use case I was considering with variable_refresh_enabled was a
graphical UI in a settings app within a desktop environment after it has
checked variable_refresh_capable. I was thinking of this as a sort of
"standard" way of configuring whether the user wants VRR enabled but
you're right about xrandr not really demonstrating this.
Every desktop environment already has their own way of dealing with
configuration and persisting settings so this doesn't bring any benefit
in that regard.
I'm almost finished writing up the v3 patchsets, which should address the concerns that
Ville, Daniel and you have raised regarding this. They drop the
"variable_refresh_enabled" property on the connector.
The variable_refresh CRTC property exists for a few reasons.
Userspace needs a method via the atomic API to let a DRM driver know
that it wants variable frame timing to be enabled. The connector enable
property certainly satisfies this condition - but I think there are
other factors to take into consideration here.
I agree that this CRTC property makes sense.
Whenever I mentioned variable refresh "features", what I really meant
was operating in one of two modes:
(1) Letting the driver and hardware adjust refresh automatically based
on the flip rate on a CRTC from a single application
(2) Setting a fixed frame duration based on the flip rate on a CRTC from
a single application
I wonder if that's too much magic in the kernel... what would be wrong
with simply flipping ASAP when VRR is active?
How will userspace be able to predict coming flip opportunities if the
kernel does so much magic?
The kernel driver doesn't need to do much more than let the hardware
know the variable refresh range. The "magic" is performed by hardware.
Most games would like to render as fast as possible to deliver a more
responsive and smoother image to the user. Many of these are also
resource intensive and won't always be able to render at the fixed
refresh rate of the panel (especially for higher refresh rates like
144Hz). The user will experience stuttering if the game takes too long
to render and misses the vblank window for the flip.
Dynamic VRR adjustment can resolve this problem. The hardware can lower
the refresh rate and increase the vblank window in response to this so
the user doesn't experience stuttering (or latency).
Userspace shouldn't predict anything.
In both cases the important thing to note is that they're both dependent
on how often a CRTC is flipping. There's also the "requirement" of a
single application in both cases - if there are multiple applications
issuing flips then the hardware can't determine the correct content
rate. The resulting user experience is going to be negative as the
monitor seemingly adjusts to a random rate.
With the existing kernelspace and userspace stacks a DRM driver can't
reasonably understand whether a single application is flipping or not
and what variable refresh "mode" to operate in - these both depend on
what's happening in userspace.
These factors are decided in userspace when interfacing with a DRM CRTC.
The userspace integration patches I've posted alongside this interface
demonstrate this usage - checks are done against a CRTC to see if the
application fully covers the surface and the automatic adjustment mode
is only enabled for when flips are issued for the CRTC originating from
that application.
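Purely as an illustration of that coverage check (the real logic lives
in the xf86-video-amdgpu patches and uses the DDX's own types - the
names below are made up):

/* Hypothetical helper: does the flipping window's geometry fully
 * cover the CRTC? */
struct rect {
    int x, y, width, height;
};

static int window_covers_crtc(const struct rect *win, const struct rect *crtc)
{
    return win->x <= crtc->x &&
           win->y <= crtc->y &&
           win->x + win->width >= crtc->x + crtc->width &&
           win->y + win->height >= crtc->y + crtc->height;
}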
Determining which connectors use the CRTC can certainly be done and the
property could be changed there, but I don't think this logically
follows from the API usage described above.
The reasoning behind this being a default property on the CRTC is that I
don't think userspace should need to keep track of what's actually
connected using the CRTC. A user can hotplug displays at will or
enable/disable variable refresh support via their display's OSD.
Capabilities can change and this is only something a driver really needs
to concern itself with - the driver is what sends the control
packets to the hardware.
Userspace must already track carefully what is connected and what is
driven through which CRTC, otherwise it cannot drive the KMS API
correctly.
I probably could have gone into more detail about some of this in the
cover letter itself for these patchsets. These patches were actually a
connector only interface originally but evolved after developing the
actual implementation.
Only when both preferences agree will variable refresh features be
enabled.
The reasoning for the split is because not all content is suitable for
variable refresh. Desktop environments, web browsers, etc. typically only
flip when needed - which will result in display flickering.
Flickering? What do you mean?
This is a property of how panels work.
The luminance for a panel will vary based on how long the vrefresh is.
Since the vrefresh length is changing as part of VRR you're more likely
to notice the difference in luminance the bigger the difference is.
The difference will be largest when switching from the min vrefresh to
the max vrefresh duration.
Large differences can occur for applications that render on demand, like
a web browser (which is why you wouldn't want VRR enabled for those). The
hardware would continuously wait for a flip that isn't coming. Then if
the user moves their cursor or the page updates it's going to happen
"randomly" in that window and the hardware will adjust to that.
The userspace integration as part of these patches demonstrates enabling
variable_refresh on the CRTC selectively.
I believe that VRR on X11 would probably depend on these bits:
1. Hardware capability
Whether the monitor, the CRTC hardware, and the kernel driver
all agree that VRR is possible.
2. User preference
A toggle on the desktop environment's display settings
application to allow or disallow VRR. After all, video mode
configuration is here too, and ideally applications should not
mess with the video mode directly.
3. Application preference
Whether the application, e.g. a game, is prepared to deal with
VRR and wishes it to be enabled.
4. X server scenegraph
Which windows happen to be showing on the VRR-capable output
and does that scenegraph allow e.g. flipping straight to client
buffers instead of having the X server copy into framebuffers
of its own.
You mentioned that VRR probably doesn't make sense if there are
multiple windows showing in the same output, so point 4 would cover
that. Also games etc. would aim to hit the direct scanout path to avoid
wasting time in the X server copy, so it seems that requiring flips to
client buffers would be reasonable for enabling VRR. Sounds like that
is exactly how you implemented it in xf86-video-amdgpu, isn't it?
Skimming through your proposal, it seems you have covered 3 out of 4
points. Did you or I miss something? You cannot have points 2 and 3
fight over the control of the same property.
How do you envision VRR to interact with X11 compositing managers?
I presume compositing managers aim for the direct scanout path in the X
server to avoid yet another copy there, right? Does point 4 need to
exclude the Composite Overlay Window (COW) from allowed VRR windows?
Given all the above, I would like to question the choice of trying to
accommodate X11 interfaces directly in the DRM UABI. There seems to be
quite many bits controlled by different processes and VRR should be
active only when they all agree. Therefore, if you want end user
applications to rely on straight connector property pass-through via
RandR, you end up with more properties than strictly necessary,
complicating the UABI.
In other words, I agree with the criticism here by Ville and Daniel.
Your interpretation of the X11 stack is mostly the same as mine, with
minor differences for points 2-4.
For the sake of the explanation I'm actually going to discuss what's
part of the plan for v3 here since it's a little simpler and should
address your concerns.
There's a large library of existing games and no modification should be
needed for them to support the dynamic VRR adjustment. They all share a
common render stack with GL or Vulkan. So mesa can be leveraged to
"mark" the applications as suitable for VRR - with a window property in
this case. There are fewer applications that are unsuitable than there
are suitable, so a blacklist approach is used here - an opt-out approach
for application preference. This covers 3.
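As a sketch of that marking mechanism (the "_VARIABLE_REFRESH" property
name is an assumption here, not something this series fixes in stone;
the DDX would simply watch for whatever property mesa sets on flipping
windows):

#include <X11/Xlib.h>
#include <X11/Xatom.h>

/* Sketch only: mark an X11 window as suitable (1) or unsuitable (0)
 * for variable refresh via a window property. */
static void set_vrr_hint(Display *dpy, Window win, unsigned long suitable)
{
    Atom prop = XInternAtom(dpy, "_VARIABLE_REFRESH", False);

    XChangeProperty(dpy, win, prop, XA_CARDINAL, 32, PropModeReplace,
                    (const unsigned char *)&suitable, 1);
}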
User preference can be handled as part of the DDX driver with something
like an X Option. Dropping the variable_refresh_enabled property in
favor of this works. This covers 2.
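Something along these lines (assuming an option named "VariableRefresh";
the exact name and default are up to the DDX):

#include <xf86.h>
#include <xf86Opt.h>

enum { OPTION_VARIABLE_REFRESH };

static const OptionInfoRec vrr_options[] = {
    { OPTION_VARIABLE_REFRESH, "VariableRefresh", OPTV_BOOLEAN, {0}, FALSE },
    { -1, NULL, OPTV_NONE, {0}, FALSE }
};

/* Sketch only: read the user preference after xf86ProcessOptions() has
 * filled in the processed copy of the table, defaulting to off. */
static Bool vrr_user_preference(const OptionInfoRec *opts)
{
    return xf86ReturnOptValBool(opts, OPTION_VARIABLE_REFRESH, FALSE);
}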
For application suitability, the Present extension will be leveraged
here since it covers the logical restrictions (single application, CRTC
covered) for this problem. If the window is flipping via the extension
then it's suitable. This covers 4.
The CRTC VRR enable property would then be set in the DDX driver when
user preference and application suitability match.
Which then leads into 1 - it will only be enabled when the hardware is
capable.
Most compositors function well under this stack. It will vary depending
on compositor support for window unredirection to let the window flip
via the Present extension. Mutter handles this without any configuration
and kwin can be configured to work with these patches. Compton can be
configured to support unredirection as well.
These (among others) are covered as part of the blacklist for the mesa
patches. They do need to be explicitly excluded.
On the other hand, in Wayland architecture, there is only one single
userspace process you need to consider in the DRM UABI: the display
server:
- There is no separate compositing manager process.
- There is no separate application to handle user preference (point 2);
well, not in the sense that you would need to consider it in the DRM
UABI, because it will go through the display server that is part of
the desktop rather than bypassing the desktop.
- There is no DRM property pass-through to end-user applications,
instead there will be dedicated protocol extensions to let the
display server make the final call.
Therefore, it would be fine to have just two bits:
1. Hardware capability
Whether the monitor, the CRTC hardware, and the kernel driver
all agree that VRR is possible.
2. - 4. Whether userspace (i.e. the display server) wants to use VRR or
not. This would be the CRTC property.
I did some prototyping using the atomic API directly and the way I was
utilizing the properties was exactly like this. Even more reason to go
with the two properties (connector capable, CRTC enable) I suppose.
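For reference, that atomic usage is not much more than the following
(assuming the CRTC property keeps the "variable_refresh" name from this
series, with the property id looked up the same way as the connector
capability earlier):

#include <errno.h>
#include <stdint.h>
#include <xf86drm.h>
#include <xf86drmMode.h>

/* Sketch only: toggle the CRTC "variable_refresh" property in an
 * atomic commit. */
static int set_crtc_vrr(int fd, uint32_t crtc_id, uint32_t vrr_prop_id,
                        int enable)
{
    drmModeAtomicReq *req = drmModeAtomicAlloc();
    int ret;

    if (!req)
        return -ENOMEM;

    drmModeAtomicAddProperty(req, crtc_id, vrr_prop_id, enable ? 1 : 0);

    /* In practice this would be committed together with the flip. */
    ret = drmModeAtomicCommit(fd, req, DRM_MODE_ATOMIC_NONBLOCK, NULL);

    drmModeAtomicFree(req);
    return ret;
}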
I also believe that it would be useful to expose vmin/vmax to userspace
as properties, so that display servers and apps can plan ahead on when
they will render. I suppose that can be left for later when someone
starts working on userspace taking advantage of it, so that the proper
userspace requirement for new UABI is fulfilled.
Knowing the vmin/vmax could potentially be useful for testing but most
applications shouldn't be trying to predict or adjust rendering based on
these.
I agree with the discussion for this coming later.
Thanks,
pq
Nicholas Kazlauskas
_______________________________________________
amd-gfx mailing list
amd-gfx@xxxxxxxxxxxxxxxxxxxxx
https://lists.freedesktop.org/mailman/listinfo/amd-gfx