Re: [RFC PATCH 01/10] drm/doc/rfc: Describe why prescriptive color pipeline is needed

On 2023-11-09 04:20, Pekka Paalanen wrote:
On Wed, 8 Nov 2023 11:27:35 -0500
Harry Wentland <harry.wentland@xxxxxxx> wrote:

On 2023-11-08 11:19, Pekka Paalanen wrote:
On Wed, 8 Nov 2023 09:31:17 -0500
Harry Wentland <harry.wentland@xxxxxxx> wrote:
On 2023-11-08 06:40, Sebastian Wick wrote:
On Wed, Nov 8, 2023 at 11:16 AM Pekka Paalanen <ppaalanen@xxxxxxxxx> wrote:

...

An incremental UAPI development approach is fine by me, meaning that
pipelines might not be complete at first, but I believe that requires
telling userspace whether the driver developers consider the pipeline
complete (no undescribed operations that would significantly change
results from the expected results given the UAPI exposed pipeline).

The prime example of what I would like to know is this: if an FB
contains a PQ-encoded image and I use a color pipeline to scale that
image up, will the interpolation happen before or after the non-linear
colorop that decodes PQ? That is a significant difference, as pointed
out by Joshua.

That's fair and I want to give that to you. My concern stems from
the sentiment that I hear that any pipeline that doesn't explicitly
advertise this is useless. I don't agree there. Let's not let perfect
be the enemy of good.

It's up to the use case. The policy of what is sufficient should reside
in userspace.

What about matching compositor shader composition with KMS?

Can we use that as a rough precision threshold? If userspace implements
the exact same color pipeline as the KMS UAPI describes, then that and
the KMS composited result should be indistinguishable in side-by-side
or alternating visual inspection on any monitor in isolation.

Did this whole effort not start from wanting to off-load things to
display hardware but still maintain visual equivalence to software/GPU
composition?

I agree with you and I want all that as well.

All I'm saying is that every userspace won't have the same policy of
what is sufficient. Just because Weston has a very high threshold
doesn't mean we can't move forward with a workable solution for other
userspace, as long as we don't do something that prevents us from
doing the perfect solution eventually.

And yes, I do want a solution that works for Weston and hear your
comments on what that requires.

I totally agree.

How will that be reflected in the UAPI? If some pipelines are different
from others in correctness/strictness perspective, how will userspace
tell them apart?

Is the current proposal along the lines of: userspace creates a
software pipeline first, and if it cannot map all operations on it to
KMS color pipeline colorops, then the KMS pipeline is not sufficient?

The gist being, if the software pipeline is scaling the image for
example, then it must find a scaling colorop in the KMS pipeline if it
cares about the scaling correctness.


With a simplified model of an imaginary color pipeline I expect this
to look like this:

Color Pipeline 1:
  EOTF Curve - CTM

Color Pipeline 2:
  EOTF Curve - scale - CTM

Realistically both would most likely map to the same HW blocks.

Assuming userspace A and B do the following:
  EOTF Curve - scale - CTM

Userspace A doesn't care about scaling and would only look for:
  EOTF Curve - CTM

and find a match with Color Pipeline 1.

Userspace B cares about scaling and would look for
  EOTF Curve - scale - CTM

and find a match with Color Pipeline 2.

If Color Pipeline 2 is not exposed in the first iteration of the
driver's implementation userspace A would still be able to make
use of it, but userspace B would not, since it requires a defined
scale operation. If the driver then exposes Color Pipeline 2 in a
second iteration userspace B can find a match for what it needs
and make use of it.
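
In rough C, the matching userspace A and B would do against an exposed
pipeline could look like the sketch below. All names here are made up
for illustration (the enum values, struct layout, and exact-match
policy are assumptions, not actual UAPI):

```c
#include <assert.h>
#include <stdbool.h>
#include <stddef.h>

/* Hypothetical colorop types; not the actual UAPI enum. */
enum colorop_type {
	COLOROP_EOTF_CURVE,
	COLOROP_SCALE,
	COLOROP_CTM,
};

/* A color pipeline as userspace sees it: an ordered list of colorops. */
struct color_pipeline {
	const enum colorop_type *ops;
	size_t num_ops;
};

/* Exact-sequence match: the exposed pipeline provides precisely the
 * operations userspace needs, in the same order. */
static bool pipeline_matches(const struct color_pipeline *needed,
			     const struct color_pipeline *exposed)
{
	size_t i;

	if (needed->num_ops != exposed->num_ops)
		return false;
	for (i = 0; i < needed->num_ops; i++)
		if (needed->ops[i] != exposed->ops[i])
			return false;
	return true;
}

/* Color Pipeline 1: EOTF Curve - CTM */
static const enum colorop_type pipe1_ops[] = {
	COLOROP_EOTF_CURVE, COLOROP_CTM,
};
static const struct color_pipeline pipeline1 = { pipe1_ops, 2 };

/* Color Pipeline 2: EOTF Curve - scale - CTM */
static const enum colorop_type pipe2_ops[] = {
	COLOROP_EOTF_CURVE, COLOROP_SCALE, COLOROP_CTM,
};
static const struct color_pipeline pipeline2 = { pipe2_ops, 3 };

/* Userspace A does not care about scaling; userspace B does. */
static const struct color_pipeline userspace_a = { pipe1_ops, 2 };
static const struct color_pipeline userspace_b = { pipe2_ops, 3 };
```

With this, A matches Color Pipeline 1 and B only matches Color
Pipeline 2, exactly as in the example above.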

Realistically userspace B would not attempt a DRM/KMS color pipeline
implementation unless it knows that there's a driver that can do what
it needs.

Another example is YUV pixel format on an FB that magically turns into
some kind of RGB when sampled, but there is no colorop to tell what
happens. I suppose userspace cannot assume that the lack of colorop
there means an identity YUV->RGB matrix either? How to model
that? I guess the already mentioned pixel format requirements on a
pipeline would help, making it impossible to use a pipeline without a
YUV->RGB colorop on a YUV FB unless the lack of colorop does indeed
mean an identity matrix.


I agree.
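
One way that pixel format requirement could look from userspace's
side, sketched very roughly (the format codes and the caps struct are
made up; real format codes would come from drm_fourcc.h):

```c
#include <assert.h>
#include <stdbool.h>
#include <stdint.h>

/* Placeholder format codes; real userspace would use the fourcc
 * values from drm_fourcc.h. */
#define FMT_XRGB8888 0x00000001u /* RGB */
#define FMT_NV12     0x00000002u /* sub-sampled YUV */

/* Hypothetical: each exposed pipeline advertises which FB pixel
 * formats it may legally be used with. */
struct pipeline_caps {
	const uint32_t *input_formats;
	unsigned int num_input_formats;
};

/* A pipeline without a YUV->RGB colorop would simply not list any
 * YUV formats, so a YUV FB could never be attached to it. */
static bool pipeline_accepts_format(const struct pipeline_caps *caps,
				    uint32_t fb_format)
{
	unsigned int i;

	for (i = 0; i < caps->num_input_formats; i++)
		if (caps->input_formats[i] == fb_format)
			return true;
	return false;
}

/* Example: a pipeline with no YUV->RGB colorop advertises RGB only. */
static const uint32_t rgb_only[] = { FMT_XRGB8888 };
static const struct pipeline_caps rgb_pipeline = { rgb_only, 1 };
```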

The same with sub-sampled YUV which probably needs to always(?) be
expanded into fully sampled at the beginning of a pipeline? Chroma
siting.

This is in addition to the previously discussed userspace policy that
if a KMS color pipeline contains colorops the userspace does not
recognise, userspace probably should not pick that pipeline under any
conditions, because it might do something completely unexpected.


Unless those colorops can be put into bypass.
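
That policy plus the bypass escape hatch could amount to a userspace
check like the following sketch (the struct and the notion of a
per-colorop bypass flag are assumptions, not actual UAPI):

```c
#include <assert.h>
#include <stdbool.h>
#include <stddef.h>

/* Hypothetical per-colorop description: a type ID as reported by the
 * kernel and whether the colorop exposes a bypass control. */
struct colorop_info {
	unsigned int type;
	bool can_bypass;
};

/* Stand-in for "does the compositor implement this colorop in its
 * shader fallback?"; here types 0-2 are the known ones. */
static bool colorop_known(unsigned int type)
{
	return type <= 2;
}

/* Reject any pipeline containing an unrecognised colorop, unless
 * that colorop can be put into bypass. */
static bool pipeline_usable(const struct colorop_info *ops, size_t n)
{
	size_t i;

	for (i = 0; i < n; i++)
		if (!colorop_known(ops[i].type) && !ops[i].can_bypass)
			return false;
	return true;
}

/* Example pipelines: one with an unknown but bypassable colorop,
 * one with an unknown colorop that cannot be bypassed. */
static const struct colorop_info bypassable[] = {
	{ 0, false }, { 7, true }, { 2, false },
};
static const struct colorop_info opaque[] = {
	{ 0, false }, { 7, false }, { 2, false },
};
```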

I think the above could work, but I feel it requires documenting
several examples, like scaling, that might not exist in the colorop
definitions at first. Otherwise userspace developers in particular
might not think of those operations, even when they care about them.
I haven't read the latest doc yet, so I'm not sure if it's already
there.


True.

But I'm somewhat reluctant to define things that don't have an
implementation by a driver and an associated IGT test. I've seen
too many definitions (like the drm_connector Colorspace property)
that define a bunch of things but it's unclear whether they are
actually used. Once you have those you can't change their definition
either, even if they are wrong. And you might not find out they are
wrong until you try to implement support end-to-end.

The age-old chicken-and-egg dilemma. It's really problematic to
define things that haven't been validated end-to-end.

There is still a gap though: what if the hardware does something
significant that is not modelled in the KMS pipeline with colorops? For
example, always using a hard-wired sRGB curve to decode before blending
and encode after blending. Or that cursor plane always uses the color
pipeline set on the primary plane. How to stop userspace from being
surprised?


Yeah, it shouldn't. Anything extra that's done should be modelled with
a colorop. But I might be somewhat contradicting myself here because
this would mean that we'd need to model scaling.

Cursors are funky on AMD and I need to think about them more (though
I've been saying that for years :D ). Maybe on AMD we might want a
custom colorop for cursors that basically says "this plane will inherit
colorops from the underlying plane".

Your comments sounded to me like you were letting go of the original
design goals. I'm happy to hear that's not the case. Even if you were,
that is a decision you can make since you are doing the work, and if I
knew you were doing that intentionally I would stop complaining.


Apologies for the misunderstanding. I agree with the original design
goals but I'm also trying to find a minimal workable solution that
allows us to iterate and improve on it going forward.

Harry


Thanks,
pq


