Hi,

On Tue, 14 Feb 2023 at 16:57, Harry Wentland <harry.wentland@xxxxxxx> wrote:
> On 2/14/23 10:49, Sebastian Wick wrote:
> From what I've seen recently I am inclined to favor an incremental
> approach more. The reason is that any API, or portion thereof, is
> useless unless it's enabled full stack. When it isn't it becomes
> dead code quickly, or never really works because we overlooked
> one thing. The colorspace debacle shows how even something as
> simple as extra enum values in KMS APIs shouldn't be added unless
> someone in a canonical upstream project actually uses them. I
> would argue that such a canonical upstream project actually has
> to be a production environment and not something like Weston.

Just to chime in as well that it is a real production environment; it's probably shipped more widely than any other compositor, by a long way. It doesn't have much place on the desktop, but it does live in planes, trains, automobiles, digital signage, kiosks, STBs/TVs, and about a billion other places you might not have expected.

Probably the main factor that joins all these together - apart from not having much desktop-style click-and-drag reconfigurable UI - is that we need to use the hardware pipeline as efficiently as possible, because either we don't have the memory bandwidth to burn like desktops do, or we need to minimise it for power/thermal reasons. Given that, we don't really want to paint ourselves into a corner with incremental solutions that mean we can't do fully efficient things later.

We're also somewhat undermanned, and we've been using our effort to try to make sure that the full solution - including full colour-managed pathways for things like movie and TV post-prod composition, design, etc. - is possible at some point through the full Wayland ecosystem. The X11 experience was so horribly botched that it wasn't really possible without a complete professional setup, and that's something I personally don't want to see repeated.

However ...

> I could see us getting to a fully new color pipeline API but
> the only way to do that is with a development model that supports
> it. While upstream needs to be our ultimate goal, a good way
> to bring in new APIs and ensure a full-stack implementation is
> to develop them in a downstream production kernel, alongside
> userspace that makes use of it. Once the implementation is
> proven in the downstream repos it can then go upstream. This
> brings new challenges, though, as things don't get wide
> testing and get out of sync with upstream quickly. The
> alternative is the incremental approach.
>
> We should look at this from a use-case angle, similar to what
> the gamescope guys are doing. Small steps, like:
> 1) Add HDR10 output (PQ, BT.2020) to the display
> 2) Add ability to do sRGB linear blending
> 3) Add ability to do sRGB and PQ linear blending
> 4) Post-blending 3D LUT
> 5) Pre-blending 3D LUT
>
> At each stage the whole stack needs to work together in production.

Personally, I do think at this stage we probably have enough of an understanding to be able to work with an intermediate solution. We just need to think hard about what that intermediate solution is - making sure that we don't end up in the same tangle of impossible semantics as the old 'Broadcast RGB' / Colorspace / HDR metadata properties, which were never thought through - so that it is something we can build on rather than something we have to work around.

But it would be really good to make HDR10/HDR10+ media and HDR games work on HDR displays, yeah.

Cheers,
Daniel
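
For context on step 1) in the quoted list above, here is a minimal sketch of what driving an HDR10 (PQ, BT.2020) output looks like with the connector properties that already exist upstream. It assumes an atomic-capable driver that exposes "Colorspace" and "HDR_OUTPUT_METADATA" on the connector; error handling, mode setting, plane state, and the rest of the atomic commit are omitted, and the exact include paths depend on the installed libdrm/kernel headers. The helper and function names are purely illustrative.

```c
#include <string.h>
#include <stdint.h>
#include <xf86drm.h>
#include <xf86drmMode.h>
#include <drm_mode.h>   /* struct hdr_output_metadata (UAPI); path may vary */

/* EOTF byte as defined by CTA-861-G: 2 selects SMPTE ST 2084 (PQ). */
#define CTA861_EOTF_SMPTE_ST2084 2

/*
 * Look up a connector property by name; for enum properties, also resolve
 * the value of the enum entry called 'enum_name' (e.g. "BT2020_RGB").
 */
static int find_conn_prop(int fd, uint32_t conn_id, const char *prop_name,
			  const char *enum_name, uint32_t *prop_id,
			  uint64_t *enum_value)
{
	drmModeObjectProperties *props =
		drmModeObjectGetProperties(fd, conn_id, DRM_MODE_OBJECT_CONNECTOR);
	int ret = -1;

	for (uint32_t i = 0; props && i < props->count_props; i++) {
		drmModePropertyRes *prop = drmModeGetProperty(fd, props->props[i]);

		if (prop && strcmp(prop->name, prop_name) == 0) {
			*prop_id = prop->prop_id;
			ret = 0;
			for (int j = 0; enum_name && j < prop->count_enums; j++)
				if (strcmp(prop->enums[j].name, enum_name) == 0)
					*enum_value = prop->enums[j].value;
		}
		drmModeFreeProperty(prop);
	}
	drmModeFreeObjectProperties(props);
	return ret;
}

static int enable_hdr10(int fd, uint32_t conn_id)
{
	struct hdr_output_metadata meta = {
		.metadata_type = 0,	/* static metadata type 1 */
		.hdmi_metadata_type1 = {
			.eotf = CTA861_EOTF_SMPTE_ST2084,	/* PQ */
			.metadata_type = 0,
			/* Mastering display primaries and max_cll/max_fall
			 * would be filled in from the content; zero = unknown. */
		},
	};
	uint32_t colorspace_prop = 0, hdr_prop = 0, blob_id = 0;
	uint64_t bt2020 = 0;
	drmModeAtomicReq *req;
	int ret;

	/* Bail out if the driver doesn't expose the properties. */
	if (find_conn_prop(fd, conn_id, "Colorspace", "BT2020_RGB",
			   &colorspace_prop, &bt2020) < 0 ||
	    find_conn_prop(fd, conn_id, "HDR_OUTPUT_METADATA", NULL,
			   &hdr_prop, NULL) < 0)
		return -1;

	drmModeCreatePropertyBlob(fd, &meta, sizeof(meta), &blob_id);

	req = drmModeAtomicAlloc();
	drmModeAtomicAddProperty(req, conn_id, colorspace_prop, bt2020);
	drmModeAtomicAddProperty(req, conn_id, hdr_prop, blob_id);
	/* ...CRTC, plane and framebuffer state go into the same commit... */
	ret = drmModeAtomicCommit(fd, req, DRM_MODE_ATOMIC_ALLOW_MODESET, NULL);
	drmModeAtomicFree(req);
	return ret;
}
```

Note that this only signals HDR10 to the sink; it says nothing about how the planes being scanned out were produced, which is exactly where the blending/LUT questions in steps 2) onwards come in.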
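And for what "sRGB linear blending" in step 2) means in practice: compositing has to happen on linear-light values, so sRGB-encoded planes need a degamma before the blend and a regamma after it. A small sketch of the maths, in plain C rather than any particular KMS property, since how a driver exposes the per-plane degamma/gamma stages is exactly what is under discussion:

```c
#include <math.h>

/* sRGB EOTF: encoded value in [0,1] -> linear light. */
static float srgb_eotf(float c)
{
	return c <= 0.04045f ? c / 12.92f
			     : powf((c + 0.055f) / 1.055f, 2.4f);
}

/* Inverse EOTF: linear light -> sRGB-encoded value in [0,1]. */
static float srgb_inv_eotf(float l)
{
	return l <= 0.0031308f ? l * 12.92f
			       : 1.055f * powf(l, 1.0f / 2.4f) - 0.055f;
}

/*
 * Alpha-blend one channel in linear light. Blending the encoded values
 * directly (the naive default) gives visibly wrong results around
 * partially transparent edges, which is why the blend stage itself has
 * to be linear.
 */
static float blend_srgb_linear(float dst, float src, float alpha)
{
	float out = srgb_eotf(src) * alpha + srgb_eotf(dst) * (1.0f - alpha);
	return srgb_inv_eotf(out);
}
```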