On Mon, 8 May 2023 09:14:18 +1000 Dave Airlie <airlied@xxxxxxxxx> wrote:

> On Sat, 6 May 2023 at 08:21, Sebastian Wick <sebastian.wick@xxxxxxxxxx> wrote:
> >
> > On Fri, May 5, 2023 at 10:40 PM Dave Airlie <airlied@xxxxxxxxx> wrote:
> > >
> > > On Fri, 5 May 2023 at 01:23, Simon Ser <contact@xxxxxxxxxxx> wrote:
> > > >
> > > > Hi all,
> > > >
> > > > The goal of this RFC is to expose a generic KMS uAPI to configure the color pipeline before blending, i.e. after a pixel is tapped from a plane's framebuffer and before it's blended with other planes. With this new uAPI we aim to reduce the battery life impact of color management and HDR on mobile devices, to improve performance and to decrease latency by skipping composition on the 3D engine. This proposal is the result of discussions at the Red Hat HDR hackfest [1] which took place a few days ago. Engineers familiar with the AMD, Intel and NVIDIA hardware have participated in the discussion.
> > > >
> > > > This proposal takes a prescriptive approach instead of a descriptive approach. Drivers describe the available hardware blocks in terms of low-level mathematical operations, then user-space configures each block. We decided against a descriptive approach where user-space would provide a high-level description of the colorspace and other parameters: we want to give more control and flexibility to user-space, e.g. to be able to replicate exactly the color pipeline with shaders and switch between shaders and KMS pipelines seamlessly, and to avoid forcing user-space into a particular color management policy.
> > >
> > > I'm not 100% sold on the prescriptive here, let's see if someone can get me over the line with some questions later.

Hi Dave,

generic userspace must always be able to fall back to GPU shaders or something else when a window suddenly stops being eligible for a KMS plane.
That can happen due to a simple window re-stacking operation, for example; maybe a notification pops up temporarily. Hence, it is highly desirable to be able to implement the exact same algorithm in shaders as the display hardware does, in order not to cause visible glitches on screen. One way to do that is to have a prescriptive UAPI design. Userspace decides what algorithms to use for color processing, and the UAPI simply offers a way to implement those well-defined mathematical operations.

An alternative could be that the UAPI gives userspace back shader programs that implement the same as what the hardware does, but... ugh.

Choosing the algorithm is policy. Userspace must be in control of policy, right? Therefore a descriptive UAPI is simply not possible. There is no single correct algorithm for these things; there are many flavors, more and less correct, different quality/performance trade-offs, and even just matters of taste. Sometimes even end-user taste, which might need to be configurable. Applications have built-in assumptions too, and they vary.

To clarify, a descriptive UAPI is a design where userspace tells the kernel "my source 1 is sRGB, my source 2 is BT.2100/PQ YCbCr 4:2:0 with blahblahblah metadata, do whatever to display those on KMS planes simultaneously". As I mentioned, there is not just one answer to that, and we should also allow for innovation in the algorithms by everyone, not just hardware designers.

A prescriptive UAPI is where we communicate mathematical operations without any semantics. It is inherently free of policy in the kernel.

> > > My feeling is color pipeline hw is not a done deal, and that hw vendors will be revising/evolving/churning the hw blocks for a while longer, as there are no real standards in the area to aim for; all the vendors are mostly just doing whatever gets Windows over the line and keeps hw engineers happy.
> > > So I have some concerns here around forwards compatibility and hence the API design.
> > >
> > > I guess my main concern is if you expose a bunch of hw blocks and someone comes up with a novel new thing, will all existing userspace work, without falling back to shaders?
> > > Do we have minimum guarantees on what hardware blocks have to be exposed to build a usable pipeline?
> > > If a hardware block goes away in a new silicon revision, do I have to rewrite my compositor? Or will it be expected that the kernel will emulate the old pipelines on top of whatever new fancy thing exists?
> >
> > I think there are two answers to those questions.
>
> These aren't selling me much better :-)
>
> > The first one is that right now KMS already doesn't guarantee that every property is supported on all hardware. The guarantee we have is that properties that are supported on a piece of hardware on a specific kernel will be supported on the same hardware on later kernels. The color pipeline is no different here. For a specific piece of hardware a newer kernel might only change the pipelines in a backwards-compatible way and add new pipelines.
> >
> > So to answer your question: if some hardware with a novel pipeline shows up it might not be supported, and that's fine. We already have cases where some hardware does not support the gamma lut property but only the CSC property, and that breaks night light because we never bothered to write a shader fallback. KMS provides ways to offload work, but a generic user space always has to provide a fallback, and this doesn't change. Hardware-specific user space on the other hand will keep working with the forward compatibility guarantees we want to provide.
>
> In my mind we've screwed up already; that isn't a case for continuing down the same path.
> The kernel is meant to be a hardware abstraction layer, not just a hardware exposure layer. The kernel shouldn't set policy and there are cases where it can't act as an abstraction layer (like where you need a compiler), but I'm not sold that this case is one of those yet. I'm open to being educated here on why it would be.

If the display hardware cannot do an operation that userspace needs, would you have the kernel internally use a GPU shader to achieve that operation? It could be kernel-build-time compiled...

How would you implement all of the CRTC properties DEGAMMA, CTM and GAMMA in a kernel driver when the hardware simply does not have those operations? Why would it be a screw-up if an API cannot deliver what hardware cannot do?

> > The second answer is that we want to provide a user space library which takes a description of a color pipeline and tries to map that to the available KMS color pipelines. If there is a novel color operation, adding support in this library would then make it possible to offload compatible color pipelines on this new hardware for all consumers of the library. Obviously there is no guarantee that whatever color pipeline compositors come up with can actually be realized on specific hardware, but that's just an inherent hardware issue.
>
> Why does this library need to be in userspace though? If there's a library making device-dependent decisions, why can't we just make those device-dependent decisions in the kernel?

What happened to the idea "put it in the kernel only if it has to be in the kernel"? Userspace is much easier to work with, faster to release, faster to fix, easier to innovate in, and so on.

Kernel UAPI cannot be deprecated, which means the kernel implementation can never get simpler.
A userspace library, OTOH, can be left in maintenance mode while a new, incompatible major version is started, maybe as another project, with no burden of having to keep the old stuff working, because the old stuff does not need to be touched and just keeps working the same as ever. There can even be several differently designed userspace libraries for projects to choose from.

We have much less of an idea of what such a library API should look like than of the kernel UAPI proposed here. There is no Khronos committee here. I mean, Khronos tried, right? OpenWF?

The aim is to be able to take advantage of hardware to the fullest, which excludes the possibility of hidden copies in the kernel, which excludes GPU fallbacks in the kernel, so it is natural that the kernel UAPI design aims to expose hardware the way it is.

> This feels like we are trying to go down the Android HWC road, but we aren't in that business.
>
> My thoughts would be userspace has to have some way to describe what it wants anyway, otherwise it does sound like I need to update mutter, kwin, surfaceflinger, chromeos, gamescope, every time a new HW device comes out that operates slightly differently to previous generations. This isn't the kernel doing hw abstraction at all, it's the kernel just opting out of designing interfaces, and it isn't something I'm sold on.

Userspace that does not want to be hardware-specific always has a fallback path, usually through Vulkan or OpenGL composition.

Even hardware-specific userspace will never regress due to a kernel update. You have to swap out hardware in order to potentially "regress". I never thought that swapping out hardware, causing something to stop working when it has never worked on that hardware ever, could be seen as a kernel regression. Have the rules changed?

Thanks,
pq
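For readers unfamiliar with the legacy CRTC color properties referenced in the thread: DEGAMMA_LUT, CTM and GAMMA_LUT chain as out = GAMMA(CTM x DEGAMMA(in)) in the DRM documentation. Below is a minimal illustrative model in Python (not kernel code, and not the proposed uAPI) of that fixed pipeline; it assumes 256-entry LUTs and normalized [0, 1] floats. This is exactly the kind of well-defined math a compositor's shader fallback would need to reproduce to avoid visible glitches when switching between GPU composition and KMS offload.

```python
def apply_lut_1d(lut, x):
    """Sample a 1D LUT with linear interpolation between entries,
    as fixed-function display hardware typically does. x in [0, 1]."""
    pos = x * (len(lut) - 1)
    lo = int(pos)
    hi = min(lo + 1, len(lut) - 1)
    frac = pos - lo
    return lut[lo] * (1.0 - frac) + lut[hi] * frac

def apply_ctm(matrix, rgb):
    """Apply a 3x3 color transformation matrix (row-major) to an RGB triple."""
    return tuple(
        sum(matrix[row][col] * rgb[col] for col in range(3))
        for row in range(3)
    )

def legacy_crtc_pipeline(degamma, ctm, gamma, rgb):
    """Model of the legacy per-CRTC color pipeline:
    DEGAMMA_LUT first, then CTM, then GAMMA_LUT."""
    linear = tuple(apply_lut_1d(degamma, c) for c in rgb)
    mixed = apply_ctm(ctm, linear)
    # Clamp before the output LUT, as fixed-function hardware would.
    clamped = tuple(min(max(c, 0.0), 1.0) for c in mixed)
    return tuple(apply_lut_1d(gamma, c) for c in clamped)

# Identity LUTs and matrix: the pipeline passes pixels through unchanged.
identity_lut = [i / 255.0 for i in range(256)]
identity_ctm = [[1.0, 0.0, 0.0], [0.0, 1.0, 0.0], [0.0, 0.0, 1.0]]
print(legacy_crtc_pipeline(identity_lut, identity_ctm, identity_lut,
                           (0.25, 0.5, 1.0)))
```

The prescriptive proposal generalizes this idea: instead of one hard-coded LUT/matrix/LUT sequence, the driver advertises whatever chain of such operations the hardware actually has, and userspace programs each stage with the same semantics it would use in a shader.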