Re: [PATCH v3] Documentation: gpu: Mention the requirements for new properties

On Fri, 18 Jun 2021 12:58:49 +0300
Laurent Pinchart <laurent.pinchart@xxxxxxxxxxxxxxxx> wrote:

> Hi Pekka,
> 
> On Fri, Jun 18, 2021 at 11:55:38AM +0300, Pekka Paalanen wrote:
> > On Thu, 17 Jun 2021 16:37:14 +0300 Laurent Pinchart wrote:  
> > > On Thu, Jun 17, 2021 at 02:33:11PM +0300, Pekka Paalanen wrote:  
> > > > On Thu, 17 Jun 2021 13:29:48 +0300 Laurent Pinchart wrote:    
> > > > > On Thu, Jun 17, 2021 at 10:27:01AM +0300, Pekka Paalanen wrote:    
> > > > > > On Thu, 17 Jun 2021 00:05:24 +0300 Laurent Pinchart wrote:      
> > > > > > > On Tue, Jun 15, 2021 at 01:16:56PM +0300, Pekka Paalanen wrote:      

...

> This isn't about human perception, but about the ability to check if the
> driver correctly configures the device :-) For driver testing, I don't
> care if the colours look right or not if userspace misconfigures the
> colour pipeline. What I need is to catch regressions when my driver
> messes up the hardware configuration. This particular device processes
> the images in stripes, and it's very easy for a stripe to be one pixel
> off (or actually a fraction of a pixel when the scaler is involved).
> 
> Testing whether a device correctly implements colour processing is of
> course also useful, but it is a distinct goal. Both need to be
> considered when stating requirements such as fully documented
> properties with a VKMS software implementation and pixel-perfect tests
> in IGT.

Right. Your driver testing requirements are different. You know how your
hardware works, so you can build your tests to suit.

What this thread is about is KMS UAPI, and verifying that a KMS
implementation matches the KMS UAPI specification. The UAPI
specification is the only thing I can look at when writing userspace.
As a userspace and compositor developer, I am specifically concerned
about how humans will perceive the final image.

If you use your hardware-specific tests to ensure that your driver
programs the hardware correctly, and IGT tests ensure that the KMS UAPI
is implemented to the KMS UAPI spec, then I would hope that everything
is fine. By reading the KMS UAPI spec I can decide whether or not to
use specific KMS features.

...in an ideal world.


> > > One very typical difference between devices is the order of the
> > > processing blocks. By modelling the KMS pipeline as degamma -> ccm ->
> > > gamma, we can accommodate hardware that has any combination of
> > > [1-2] * 1D LUTs + 1 * CCM. Now, throw one 3D LUT into the mix, at  
> > 
> > But you cannot represent pipelines like
> > 1D LUT -> 1D LUT -> CCM
> > because the abstract pipeline just doesn't have the elements for that.
> > OTOH, maybe that ordering does not even make sense to have in hardware?
> > So maybe not all combinations are actually needed.  
> 
> If userspace wanted such a pipeline (I'm not sure why it would), then it
> could just combine the two LUTs in one.

Maybe? You could also combine the 1D LUTs into the 3D LUT in the
middle, but the result is generally not the same as using them
separately when the 3D LUT size is limited.
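
Folding two 1D LUTs into one is just resampling one table through the
other. A minimal C sketch of that, purely for illustration (the fixed
256-entry, 16-bit-per-channel table format is my assumption, not
anything mandated by KMS):

  #include <stdint.h>

  #define LUT_SIZE 256

  /* Look up x (0..65535) in a LUT_SIZE-entry table, linearly
   * interpolating between the two nearest entries. i is at most
   * LUT_SIZE - 2 by construction, so lut[i + 1] stays in bounds. */
  static uint16_t lut_sample(const uint16_t lut[LUT_SIZE], uint16_t x)
  {
          uint32_t pos = (uint32_t)x * (LUT_SIZE - 1); /* index, 16.16 */
          uint32_t i = pos >> 16;
          uint32_t frac = pos & 0xffff;

          return (uint16_t)((lut[i] * (0x10000 - frac) +
                             lut[i + 1] * frac) >> 16);
  }

  /* out(x) = second(first(x)): one table that applies both LUTs. */
  static void lut_compose(const uint16_t first[LUT_SIZE],
                          const uint16_t second[LUT_SIZE],
                          uint16_t out[LUT_SIZE])
  {
          int i;

          for (i = 0; i < LUT_SIZE; i++)
                  out[i] = lut_sample(second, first[i]);
  }

This composition is lossless apart from the resampling. Folding the
same 1D LUTs into a size-limited 3D LUT is not, which is the point
above.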

> > > different points in the pipeline depending on the device, and it will
> > > start getting complicated, even if the use case is quite simple and
> > > common. This is getting a bit off topic, but how would you solve this
> > > one in particular?
> > 
> > By defining all the points in the abstract color pipeline where a 3D
> > LUT could exist. Then each point would probably need its own KMS
> > property.  
> 
> And when we add the next object? This can't scale, I'm afraid. You'll
> have a spec that tells you that things can be in any order, and may well
> be able to expose the order to userspace, but you won't be able to
> implement a generic userspace that makes use of it.

Are you saying that generic userspace cannot happen? On top of KMS
UAPI, that is.

Why would a generic userspace library API be a more feasible effort?

> BTW, if we want to expose a finer-grained topology of processing blocks,
> I'd recommend looking at the Media Controller API, it was designed just
> for that purpose.

Sure. However, right now we have KMS properties instead. Or should they
all just go unused?

Maybe the rule should be to not add any more KMS properties, and to
tell people to design something completely new based on the Media
Controller API instead?

I've never looked at the Media Controller API, so I have no idea how
well it would suit display servers.

I firmly believe that I cannot use processing blocks exposed by KMS
unless I have guarantees that they work the same across all Linux
drivers.
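
To make "using processing blocks exposed by KMS" concrete, this is
roughly all a generic compositor can do today: program the documented
degamma -> ctm -> gamma pipeline through the atomic API. A sketch with
error handling dropped; lookup_prop_id() is a stand-in for the usual
drmModeObjectGetProperties() walk, not a libdrm function:

  #include <stdint.h>
  #include <xf86drm.h>
  #include <xf86drmMode.h> /* also pulls in drm_color_lut/drm_color_ctm */

  /* Stand-in helper: resolve a property name on a CRTC to its id via
   * drmModeObjectGetProperties(); omitted for brevity. */
  extern uint32_t lookup_prop_id(int fd, uint32_t crtc_id,
                                 const char *name);

  static int set_crtc_color_pipeline(int fd, uint32_t crtc_id,
                                     const struct drm_color_lut *degamma,
                                     uint32_t degamma_len,
                                     const struct drm_color_ctm *ctm,
                                     const struct drm_color_lut *gamma,
                                     uint32_t gamma_len)
  {
          uint32_t degamma_blob, ctm_blob, gamma_blob;
          drmModeAtomicReq *req;
          int ret;

          drmModeCreatePropertyBlob(fd, degamma,
                                    degamma_len * sizeof(*degamma),
                                    &degamma_blob);
          drmModeCreatePropertyBlob(fd, ctm, sizeof(*ctm), &ctm_blob);
          drmModeCreatePropertyBlob(fd, gamma,
                                    gamma_len * sizeof(*gamma),
                                    &gamma_blob);

          req = drmModeAtomicAlloc();
          drmModeAtomicAddProperty(req, crtc_id,
                  lookup_prop_id(fd, crtc_id, "DEGAMMA_LUT"), degamma_blob);
          drmModeAtomicAddProperty(req, crtc_id,
                  lookup_prop_id(fd, crtc_id, "CTM"), ctm_blob);
          drmModeAtomicAddProperty(req, crtc_id,
                  lookup_prop_id(fd, crtc_id, "GAMMA_LUT"), gamma_blob);

          ret = drmModeAtomicCommit(fd, req, 0, NULL);
          drmModeAtomicFree(req);
          return ret;
  }

This only works because the spec fixes the block order. The moment the
order varies per device, a generic function like this has nothing left
to target.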

> > We already have the KMS pipeline exactly as degamma -> ctm -> gamma and
> > drivers need to respect that order.
> > 
> > If the combinatorial explosion gets out of hand, maybe we need a KMS
> > property to switch to a whole other abstract pipeline which defines a
> > different ordering of the same and/or different KMS properties.
> > 
> > From what I've learnt recently, if you have a 3D LUT, you want a 1D LUT
> > on each side of it for memory vs. precision optimization. And after the
> > degamma -> ctm -> gamma pipeline you may want one more ctm for
> > RGB-to-YCbCr conversion. So I have hope that the abstract pipeline with
> > all actually implemented hardware features might not go totally out of
> > hand.  
> 
> On the output of the CRTC, I have, in this order
> 
> - RGB to YUV conversion (can only use presets for BT.601 and BT.709, with
>   limited or full range)
> - 3D LUT
> - 1D LUT
> - YUV to RGB conversion (presets only)
> 
> The RGB to YUV and YUV to RGB conversions can be bypassed. That's it.
> 
> There's also a histogram engine, to allow implementation of dynamic
> gamma correction. This would require userspace to read the histogram
> data for every frame, and update the LUTs accordingly.

Interesting, but almost none of that would be used by the color
management pipeline model I'm familiar with. I wonder what use cases
LUTs in YUV were designed for, especially *after* blending.
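
For the record, the dynamic gamma correction described above would need
a per-frame feedback loop along these lines. A hedged sketch:
read_histogram(), compute_gamma_lut() and commit_gamma_lut() are
hypothetical device-specific calls, which is exactly the problem, since
nothing like them exists in the generic UAPI:

  #include <stdbool.h>
  #include <stdint.h>

  #define HIST_BINS 256
  #define LUT_LEN   256

  /* Hypothetical device-specific entry points; no generic KMS UAPI
   * exists for any of these today. */
  extern void read_histogram(int fd, uint32_t crtc_id,
                             uint32_t bins[HIST_BINS]);
  extern void compute_gamma_lut(const uint32_t bins[HIST_BINS],
                                uint16_t lut[LUT_LEN]);
  extern void commit_gamma_lut(int fd, uint32_t crtc_id,
                               const uint16_t lut[LUT_LEN]);
  extern bool wait_for_vblank(int fd);

  /* Sample the histogram of the frame just scanned out, derive a new
   * gamma curve, and latch it for the next frame. Userspace has to
   * keep up with the refresh rate, every single frame. */
  static void dynamic_gamma_loop(int fd, uint32_t crtc_id)
  {
          uint32_t bins[HIST_BINS];
          uint16_t lut[LUT_LEN];

          while (wait_for_vblank(fd)) {
                  read_histogram(fd, crtc_id, bins);
                  compute_gamma_lut(bins, lut);
                  commit_gamma_lut(fd, crtc_id, lut);
          }
  }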

...

> I really think we'll end up needing device-specific userspace
> components. Given the example above, with histogram calculation and
> dynamic gamma correction, there's no way we'll be able to have a single
> userspace implementation that would support these kinds of features for
> all devices. It doesn't have to be solved now, but we probably want to
> start thinking about an API to plug device-specific components in
> compositors. This is the kind of problem that the libcamera project is
> tackling on the camera side, I believe a similar approach will be needed
> for displays too.

To be able to plug device-specific components into compositors, we need
a driver-agnostic API through which compositors can use those
device-specific components. It's still the same problem, just moved
from the UAPI level into userspace.
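
To make that concrete: whatever form the plug-in takes, its
compositor-facing side has to look something like the below. Purely
illustrative; none of these types exist anywhere today:

  #include <stdbool.h>
  #include <stdint.h>

  /* An abstract color request, in device-agnostic terms. */
  struct color_request {
          uint32_t src_colorspace; /* e.g. an enum over sRGB, BT.2020... */
          uint32_t dst_colorspace;
          const uint16_t *lut3d;   /* optional 3D LUT data, or NULL */
          uint32_t lut3d_dim;      /* entries per axis */
  };

  /* What the device can do, again in device-agnostic terms. */
  struct color_caps {
          uint32_t lut1d_max_len;
          uint32_t lut3d_max_dim;  /* 0 if no 3D LUT */
          bool has_ctm;
  };

  /* The contract a device-specific color module would implement for
   * the compositor. */
  struct color_pipeline_ops {
          int (*get_caps)(void *dev, struct color_caps *caps);
          int (*apply)(void *dev, const struct color_request *req);
  };

Defining these semantics so they mean the same thing for every device
is exactly the problem we already have with KMS properties at the UAPI
level.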


Thanks,
pq
