On 5/7/23 19:14, Dave Airlie wrote:
> On Sat, 6 May 2023 at 08:21, Sebastian Wick <sebastian.wick@xxxxxxxxxx> wrote:
>>
>> On Fri, May 5, 2023 at 10:40 PM Dave Airlie <airlied@xxxxxxxxx> wrote:
>>>
>>> On Fri, 5 May 2023 at 01:23, Simon Ser <contact@xxxxxxxxxxx> wrote:
>>>>
>>>> Hi all,
>>>>
>>>> The goal of this RFC is to expose a generic KMS uAPI to configure the
>>>> color pipeline before blending, i.e. after a pixel is tapped from a
>>>> plane's framebuffer and before it's blended with other planes. With
>>>> this new uAPI we aim to reduce the battery life impact of color
>>>> management and HDR on mobile devices, to improve performance and to
>>>> decrease latency by skipping composition on the 3D engine. This
>>>> proposal is the result of discussions at the Red Hat HDR hackfest [1]
>>>> which took place a few days ago. Engineers familiar with the AMD,
>>>> Intel and NVIDIA hardware have participated in the discussion.
>>>>
>>>> This proposal takes a prescriptive approach instead of a descriptive
>>>> approach. Drivers describe the available hardware blocks in terms of
>>>> low-level mathematical operations, then user-space configures each
>>>> block. We decided against a descriptive approach where user-space
>>>> would provide a high-level description of the colorspace and other
>>>> parameters: we want to give more control and flexibility to
>>>> user-space, e.g. to be able to replicate exactly the color pipeline
>>>> with shaders and switch between shaders and KMS pipelines seamlessly,
>>>> and to avoid forcing user-space into a particular color management
>>>> policy.
>>>
>>> I'm not 100% sold on the prescriptive approach here; let's see if
>>> someone can get me over the line with some questions later.
>>>
>>> My feeling is that color pipeline hw is not a done deal, and that hw
>>> vendors will be revising/evolving/churning the hw blocks for a while
>>> longer, as there are no real standards in the area to aim for; all the
>>> vendors are mostly just doing whatever gets Windows over the line and
>>> keeps hw engineers happy. So I have some concerns here around forwards
>>> compatibility and hence the API design.
>>>
>>> I guess my main concern is: if you expose a bunch of hw blocks and
>>> someone comes up with a novel new thing, will all existing userspace
>>> work, without falling back to shaders?
>>> Do we have minimum guarantees on what hardware blocks have to be
>>> exposed to build a usable pipeline?
>>> If a hardware block goes away in a new silicon revision, do I have to
>>> rewrite my compositor? Or will it be expected that the kernel will
>>> emulate the old pipelines on top of whatever new fancy thing exists?
>>
>> I think there are two answers to those questions.
>
> These aren't selling me much better :-)
>
>> The first one is that right now KMS already doesn't guarantee that
>> every property is supported on all hardware. The guarantee we have is
>> that properties that are supported on a piece of hardware on a
>> specific kernel will be supported on the same hardware on later
>> kernels. The color pipeline is no different here. For a specific piece
>> of hardware a newer kernel might only change the pipelines in a
>> backwards compatible way and add new pipelines.
>>
>> So to answer your question: if some hardware with a novel pipeline
>> shows up it might not be supported, and that's fine.
>> We already have cases where some hardware does not support the gamma
>> LUT property but only the CSC property, and that breaks night light
>> because we never bothered to write a shader fallback. KMS provides
>> ways to offload work, but a generic user space always has to provide
>> a fallback and this doesn't change. Hardware specific user space on
>> the other hand will keep working with the forward compatibility
>> guarantees we want to provide.
>
> In my mind we've screwed up already; that isn't a case to be made for
> continuing down the same path.
>
> The kernel is meant to be a hardware abstraction layer, not just a
> hardware exposure layer. The kernel shouldn't set policy and there are
> cases where it can't act as an abstraction layer (like where you need
> a compiler), but I'm not sold that this case is one of those yet. I'm
> open to being educated here on why it would be.
>

Thanks for raising these points. When I started out looking at color
management I favored the descriptive model. Most other HW vendors I've
talked to also tell me that they think about descriptive APIs, since
that allows HW vendors to map them to whatever their HW supports.
Sebastian, Pekka, and others managed to change my mind about this, but
I still keep running into difficult questions within AMD.

Sebastian, Pekka, and Jonas have already done a good job of describing
our reasoning behind the prescriptive model. It might be helpful to see
how different the results of different tone-mapping operators can look:
http://helgeseetzen.com/wp-content/uploads/2017/06/HS1.pdf

According to my understanding all other platforms that have HDR now
have a single compositor. At least that's true for Windows. This allows
driver developers to tune their tone-mapping algorithm to match the
algorithm used by the compositor when offloading plane composition.
This is not true on Linux, where we have a myriad of compositors for
good reasons, many of which have a different view of how they want
color management to look. Even if we came up with an API that lets
compositors define their input, output, scaling, and blending space in
detail, it would still not be feasible to describe the minutiae of the
tone-mapping algorithms, hence leading to differences in output when
KMS color management is used.

I am debating whether we need to be serious about a userspace library
(or maybe a user-mode driver) to provide an abstraction from the
descriptive to the prescriptive model. HW vendors need a way to provide
timely support for new HW generations without requiring updates to a
large number of compositors.

Harry

>>
>> The second answer is that we want to provide a user space library
>> which takes a description of a color pipeline and tries to map that
>> to the available KMS color pipelines. If there is a novel color
>> operation, adding support in this library would then make it possible
>> to offload compatible color pipelines on this new hardware for all
>> consumers of the library. Obviously there is no guarantee that
>> whatever color pipeline compositors come up with can actually be
>> realized on specific hardware, but that's just an inherent hardware
>> issue.
>>
>
> Why does this library need to be in userspace though? If there's a
> library making device dependent decisions, why can't we just make
> those device dependent decisions in the kernel?
>
> This feels like we are trying to go down the Android HWC road, but we
> aren't in that business.
>
> My thoughts would be that userspace has to have some way to describe
> what it wants anyway; otherwise it does sound like I need to update
> mutter, kwin, surfaceflinger, chromeos, gamescope, every time a new HW
> device comes out that operates slightly differently to previous
> generations. This isn't the kernel doing hw abstraction at all, it's
> the kernel just opting out of designing interfaces, and it isn't
> something I'm sold on.
>
> Dave.
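
To make the prescriptive model from the RFC at the top of the thread a
bit more concrete, here is a minimal sketch of what configuring one
pre-blending block from a compositor could look like. The property and
object IDs are hypothetical placeholders for whatever the RFC ends up
defining; only the libdrm atomic calls and the existing
struct drm_color_ctm blob layout are real today. A TEST_ONLY commit is
used to decide whether the offload works or the shader fallback is
needed.

/*
 * Sketch only: plane/pipeline/op IDs below are placeholders, not uAPI.
 * Build with: cc sketch.c $(pkg-config --cflags --libs libdrm)
 */
#include <fcntl.h>
#include <stdint.h>
#include <stdio.h>
#include <xf86drm.h>
#include <xf86drmMode.h>
#include <drm/drm_mode.h>	/* struct drm_color_ctm */

/* S31.32 sign-magnitude fixed point, as used by the existing CTM blob. */
static __u64 fixed_s31_32(double v)
{
	__u64 sign = v < 0 ? 1ULL << 63 : 0;

	if (v < 0)
		v = -v;
	return sign | (__u64)(v * (double)(1ULL << 32));
}

int main(void)
{
	int fd = open("/dev/dri/card0", O_RDWR | O_CLOEXEC);
	if (fd < 0)
		return 1;

	/* 3x3 gamut matrix; identity here for brevity. */
	struct drm_color_ctm ctm = { 0 };
	ctm.matrix[0] = ctm.matrix[4] = ctm.matrix[8] = fixed_s31_32(1.0);

	uint32_t blob_id = 0;
	if (drmModeCreatePropertyBlob(fd, &ctm, sizeof(ctm), &blob_id))
		return 1;

	/*
	 * In a real compositor these come from enumerating the plane's
	 * exposed color pipeline(s) and the operation blocks inside them;
	 * hardcoded placeholders here (hypothetical).
	 */
	uint32_t plane_id = 31;
	uint32_t prop_color_pipeline = 100;	/* selects one exposed pipeline */
	uint32_t pipeline_enum_value = 1;
	uint32_t matrix_op_id = 200;		/* one block within that pipeline */
	uint32_t prop_matrix_data = 201;

	drmModeAtomicReqPtr req = drmModeAtomicAlloc();
	drmModeAtomicAddProperty(req, plane_id, prop_color_pipeline,
				 pipeline_enum_value);
	drmModeAtomicAddProperty(req, matrix_op_id, prop_matrix_data, blob_id);

	/* Ask the kernel whether this configuration would be accepted. */
	int ret = drmModeAtomicCommit(fd, req, DRM_MODE_ATOMIC_TEST_ONLY, NULL);
	printf("test-only commit: %d\n", ret);

	drmModeAtomicFree(req);
	return ret ? 1 : 0;
}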
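
The user-space mapping library that Sebastian and Harry mention could,
as a purely hypothetical sketch, expose something along these lines:
the compositor hands it a device-independent description of the
transform it wants and gets back either resolved atomic properties or a
refusal that triggers the shader fallback. None of these names or types
exist anywhere today.

/* Hypothetical descriptive -> prescriptive mapping interface (sketch). */
#include <stdbool.h>
#include <stdint.h>
#include <xf86drmMode.h>

/* Device-independent description of the per-plane transform the
 * compositor wants: decode EOTF, 3x3 gamut matrix, re-encode. */
struct color_desc {
	int src_eotf;		/* e.g. sRGB, PQ */
	double gamut_matrix[9];
	int dst_eotf;
};

/* Opaque handle holding the resolved (object, property, value) list. */
struct color_mapping;

/* Try to express 'desc' with the color-op blocks the driver exposes
 * for 'plane_id'. Returns NULL if the hardware cannot represent it. */
struct color_mapping *color_map(int drm_fd, uint32_t plane_id,
				const struct color_desc *desc);

/* Append the resolved properties to an atomic request. */
int color_apply(const struct color_mapping *m, drmModeAtomicReqPtr req);

void color_mapping_free(struct color_mapping *m);

/*
 * A compositor would then do roughly:
 *
 *	struct color_mapping *m = color_map(fd, plane_id, &desc);
 *	if (m)
 *		color_apply(m, req);	// offload to KMS
 *	else
 *		render_with_shaders();	// generic fallback path
 */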