On 2018-04-12 01:30 AM, Cyr, Aric wrote:
>> From: Michel Dänzer [mailto:michel at daenzer.net]
>> Sent: Wednesday, April 11, 2018 05:50
>> On 2018-04-11 08:57 AM, Nicolai Hähnle wrote:
>>> On 10.04.2018 23:45, Cyr, Aric wrote:
>>>> How does it work fine today given that all the kernel seems to know is
>>>> 'current' or 'current+1' vsyncs.
>>>> Presumably the applications somehow schedule all this just fine.
>>>> If this works without variable refresh for 60Hz, will it not work for
>>>> a fixed-rate "48Hz" monitor (assuming a 24Hz video)?
>>>
>>> You're right. I guess a better way to state the point is that it
>>> *doesn't* really work today with fixed refresh, but if we're going to
>>> introduce a new API, then why not do so in a way that can fix these
>>> additional problems as well?
>>
>> Exactly. With a fixed frame duration, we'll still have fundamentally the
>> same issues as we currently do without variable refresh, not making use
>> of the full potential of variable refresh.
>
> I see. Well then, that makes this sort of orthogonal to the discussion.
> If you say that there are no media players on Linux today that can
> maintain audio/video sync with a 60Hz display, then that problem is much
> larger than the one we're trying to solve here.
> By the way, I don't believe that is a true statement :)

Indeed, that's not what we're saying: with a fixed refresh rate,
audio/video sync cannot be maintained without occasional visual
artifacts, due to skipped / repeated frames.


>>> How about what I wrote in an earlier mail of having attributes:
>>>
>>> - target_present_time_ns
>>> - hint_frame_time_ns (optional)
>>>
>>> ... and if a video player set both, the driver could still do the
>>> optimizations you've explained?
>>
>> FWIW, I don't think a property would be a good mechanism for the target
>> presentation time.
>>
>> At least with VDPAU, video players are already explicitly specifying the
>> target presentation time, so no changes should be required at that
>> level.
>> Don't know about other video APIs.
>>
>> The X11 Present extension protocol is also prepared for specifying the
>> target presentation time already, the support for it just needs to be
>> implemented.
>
> I'm perfectly OK with a presentation time-based *API*. I get it from a
> user mode/app perspective, and that's fine. We need that feedback and
> would like help defining those portions of the stack.
>
> However, I think it doesn't make as much sense as a *DDI*, because it
> doesn't correspond to any hardware, real or logical (i.e. no one would
> implement it in HW this way), and the industry specs aren't defined
> that way.

Which specs are you referring to? There are at least two specs (VDPAU
and VK_GOOGLE_display_timing) which are defined that way.


> You can have libdrm or some other usermode component translate your
> presentation time into a frame duration and schedule it.

This cuts both ways.


> What's the advantage of having this in the kernel, besides the fact
> that we lose the intent of the application and could prevent features
> and optimizations?

To me, presentation time is much clearer as the intent of the
application. It can express all the same things frame duration can, but
not the other way around.


> When it gets to kernel, I think it is much more elegant for the flip structure
> to contain a simple duration that says "hey, show this frame on the screen for
> this long".

A game cannot know this in advance, can it? Per the Croteam
presentation, it depends on when this frame is actually presented
(among other things).


> 1) We can simplify media players' lives by helping them get really,
> really close to their content rate, so they wouldn't need any frame
> rate conversion.

At least with VDPAU, media players shouldn't need any changes at all,
as they're already explicitly specifying the presentation times.


> They'll still need A/V syncing though, and variable refresh cannot
> solve this

I've been trying to explain that it can, perfectly.
Can you explain why you think it can't, or ask if something isn't clear
about what I've been explaining?


> P.S. Thanks for the Croteam link. Interesting, but basically nullified
> by variable refresh rate displays.

According to whom / what? I don't see why it wouldn't apply to variable
refresh as well.

Without time-based presentation, the game cannot prevent a frame from
being presented too early. There is no doubt that the artifacts of not
doing this properly will be less noticeable with variable refresh, but
that doesn't mean they don't exist.


-- 
Earthling Michel Dänzer               |               http://www.amd.com
Libre software enthusiast             |             Mesa and X developer