RE: [RFC v2 01/22] drm: RFC for Plane Color Hardware Pipeline

> -----Original Message-----
> From: Pekka Paalanen <ppaalanen@xxxxxxxxx>
> Sent: Tuesday, October 12, 2021 4:01 PM
> To: Shankar, Uma <uma.shankar@xxxxxxxxx>
> Cc: intel-gfx@xxxxxxxxxxxxxxxxxxxxx; dri-devel@xxxxxxxxxxxxxxxxxxxxx; 
> harry.wentland@xxxxxxx; ville.syrjala@xxxxxxxxxxxxxxx; 
> brian.starkey@xxxxxxx; sebastian@xxxxxxxxxxxxxxxxx; 
> Shashank.Sharma@xxxxxxx
> Subject: Re: [RFC v2 01/22] drm: RFC for Plane Color Hardware Pipeline
> 
> On Tue,  7 Sep 2021 03:08:43 +0530
> Uma Shankar <uma.shankar@xxxxxxxxx> wrote:
> 
> > This is a RFC proposal for plane color hardware blocks.
> > It exposes the property interface to userspace and calls out the 
> > details or interfaces created and the intended purpose.
> >
> > Credits: Ville Syrjälä <ville.syrjala@xxxxxxxxxxxxxxx>
> > Signed-off-by: Uma Shankar <uma.shankar@xxxxxxxxx>
> > ---
> >  Documentation/gpu/rfc/drm_color_pipeline.rst | 167 +++++++++++++++++++
> >  1 file changed, 167 insertions(+)
> >  create mode 100644 Documentation/gpu/rfc/drm_color_pipeline.rst
> >
> > diff --git a/Documentation/gpu/rfc/drm_color_pipeline.rst b/Documentation/gpu/rfc/drm_color_pipeline.rst
> > new file mode 100644
> > index 000000000000..0d1ca858783b
> > --- /dev/null
> > +++ b/Documentation/gpu/rfc/drm_color_pipeline.rst
> > @@ -0,0 +1,167 @@
> > +==================================================
> > +Display Color Pipeline: Proposed DRM Properties
> 
> Hi,
> is there a practise of landing proposal documents in the kernel? How 
> does that work, will a kernel tree carry the patch files?
> Or should this document be worded like documentation for an accepted 
> feature, and then the patches either land or don't?
> 

A separate thread has been forked for this query; we will conclude it there.

> > +==================================================
> > +
> > +This is how a typical display color hardware pipeline looks like:
> 
> Typical, or should we say that this is the abstract color pipeline that KMS assumes?
> 
> Then drivers map this to pieces of hardware the best they can and 
> reject or do not expose the parts they cannot.

Yeah, sure Pekka, I will reword this to make it clearer.

> > + +-------------------------------------------+
> > + |                RAM                        |
> > + |  +------+    +---------+    +---------+   |
> > + |  | FB 1 |    |  FB 2   |    | FB N    |   |
> > + |  +------+    +---------+    +---------+   |
> > + +-------------------------------------------+
> > +       |  Plane Color Hardware Block |
> > + +--------------------------------------------+
> > + | +---v-----+   +---v-------+   +---v------+ |
> > + | | Plane A |   | Plane B   |   | Plane N  | |
> > + | | DeGamma |   | Degamma   |   | Degamma  | |
> > + | +---+-----+   +---+-------+   +---+------+ |
> > + |     |             |               |        |
> > + | +---v-----+   +---v-------+   +---v------+ |
> > + | |Plane A  |   | Plane B   |   | Plane N  | |
> > + | |CSC/CTM  |   | CSC/CTM   |   | CSC/CTM  | |
> > + | +---+-----+   +----+------+   +----+-----+ |
> > + |     |              |               |       |
> > + | +---v-----+   +----v------+   +----v-----+ |
> > + | | Plane A |   | Plane B   |   | Plane N  | |
> > + | | Gamma   |   | Gamma     |   | Gamma    | |
> > + | +---+-----+   +----+------+   +----+-----+ |
> > + |     |              |               |       |
> > + +--------------------------------------------+
> > ++------v--------------v---------------v-------|
> > +||                                           ||
> > +||           Pipe Blender                    ||
> > ++--------------------+------------------------+
> > +|                    |                        |
> > +|        +-----------v----------+             |
> > +|        |  Pipe DeGamma        |             |
> > +|        |                      |             |
> > +|        +-----------+----------+             |
> > +|                    |            Pipe Color  |
> > +|        +-----------v----------+ Hardware    |
> > +|        |  Pipe CSC/CTM        |             |
> > +|        |                      |             |
> > +|        +-----------+----------+             |
> > +|                    |                        |
> > +|        +-----------v----------+             |
> > +|        |  Pipe Gamma          |             |
> > +|        |                      |             |
> > +|        +-----------+----------+             |
> > +|                    |                        |
> > ++---------------------------------------------+
> > +                     |
> > +                     v
> > +               Pipe Output
> > +
> > +Proposal is to have below properties for a plane:
> > +
> > +* Plane Degamma or Pre-Curve:
> > +	* This will be used to linearize the input framebuffer data.
> > +	* It will apply the reverse of the color transfer function.
> > +	* It can be a degamma curve or OETF for HDR.
> 
> As you want to produce light-linear values, you use EOTF or inverse OETF.
> 
> The term OETF has a built-in assumption that that happens in a camera:
> it takes in light and produces and electrical signal. Lately I have 
> personally started talking about non-linear encoding of color values, 
> since EOTF is often associated with displays if nothing else is said (taking in an electrical signal and producing light).
> 
> So this would be decoding the color values into light-linear color 
> values. That is what an EOTF does, yes, but I feel there is a nuanced 
> difference. A piece of equipment implements an EOTF by turning an 
> electrical signal into light, hence EOTF often refers to specific 
> equipment. You could talk about content EOTF to denote content value 
> encoding, as opposed to output or display EOTF, but that might be 
> confusing if you look at e.g. the diagrams in BT.2100: is it the EOTF or is it the inverse OETF? Is the (inverse?) OOTF included?
> 
> So I try to side-step those questions by talking about encoding.

The idea here is that the framebuffer presented to the display plane engine will be non-linear,
so the output of a media decode should be content with the EOTF applied.
Playback transfer function (EOTF): inverse OETF plus rendering-intent gamma.

To make it linear, we should apply the OETF. The confusion is whether the OETF is equivalent to
the inverse EOTF; we could check that to confirm the terminology.

Since the pre-curve (or degamma, as we have called it in the past) accepts programmable LUT values,
it is flexible enough to accommodate any curve. This holds for HDR as well as for traditional
gamma-encoded sRGB data.
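
To make this concrete, here is a minimal illustrative sketch (not part of the proposal) of what
such a pre-curve could carry: it fills a degamma LUT that decodes sRGB-encoded values into linear
light, and the same mechanism would carry e.g. a PQ curve for HDR content. It assumes the uapi
struct drm_color_lut is visible through the libdrm headers; the LUT size and precision would
really come from the capability blob of the selected PLANE_DEGAMMA_MODE, and the helper name is
made up for the example.

#include <math.h>
#include <stdint.h>
#include <xf86drmMode.h>	/* pulls in the uapi struct drm_color_lut */

/* sRGB EOTF: decode a non-linear sRGB value in [0, 1] to linear light. */
static double srgb_eotf(double v)
{
	return (v <= 0.04045) ? v / 12.92 : pow((v + 0.055) / 1.055, 2.4);
}

/*
 * Fill a degamma (pre-curve) LUT that linearizes sRGB-encoded content.
 * Entry count and precision are taken as parameters here; in practice
 * they come from the blob behind the selected PLANE_DEGAMMA_MODE value.
 */
static void fill_srgb_degamma_lut(struct drm_color_lut *lut, unsigned int size)
{
	for (unsigned int i = 0; i < size; i++) {
		double in = (double)i / (size - 1);
		uint16_t out = (uint16_t)lround(srgb_eotf(in) * 0xffff);

		lut[i].red = lut[i].green = lut[i].blue = out;
		lut[i].reserved = 0;
	}
}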

> > +	* This linear data can be further acted on by the following
> > +	* color hardware blocks in the display hardware pipeline
> 
> I think this and the above description ties the intended use down too 
> much. This is one possible way to use degamma, yes, but there may be 
> others. Particularly if CTM can be replaced with a 3D LUT, then the 
> degamma is more likely a shaper (non-linear adjustment to 3D LUT tap positions).

Yeah, agreed, this is just one of the use cases. It is just a LUT which can be used for other
purposes as well and is not limited to a linearization operation. I will update this.

> I would prefer the name pre-curve to underline that this can be 
> whatever one wants it to be, but I understand that people may be more familiar with the name degamma.

I feel pre-curve would be fine, but it deviates from the naming of the legacy crtc/pipe color
properties. Maybe we can stay with the legacy naming and add more documentation to call out its
use cases clearly.

> > +
> > +UAPI Name: PLANE_DEGAMMA_MODE
> > +Description: Enum property with values as blob_id's which advertizes the
> 
> Is enum with blob id values even a thing?

Yes, we can have that. This is a dynamic enum created with blobs: each enum entry's value is a
blob id, and the blob contains the data structure exposing the full color capabilities of the
hardware. It is a very interesting play with blobs (@ville.syrjala@xxxxxxxxxxxxxxx's brainchild).

> > +	    possible degamma modes and lut ranges supported by the platform.
> > +	    This  allows userspace to query and get the plane degamma color
> > +	    caps and choose the appropriate degamma mode and create lut values
> > +	    accordingly.
> 
> I agree that some sort of "mode" switch is necessary, and 
> advertisement of capabilities as well.
> 

This enum with blob ids is an interesting way to advertise segmented LUTs.
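
As an illustration of the "enum with blob id values" mechanism, a userspace sketch using only
existing libdrm calls could walk the mode enum and fetch each capability blob as below. The
helper name is made up, and the blob contents (segment/range descriptors, LUT size, precision)
are not decoded here since their layout is defined per mode by this proposal.

#include <stdio.h>
#include <stdint.h>
#include <stddef.h>
#include <xf86drm.h>
#include <xf86drmMode.h>

/*
 * Walk a "mode" enum property whose enum values are blob ids
 * (e.g. PLANE_DEGAMMA_MODE) and fetch each capability blob.
 */
static void dump_mode_property(int fd, uint32_t prop_id)
{
	drmModePropertyPtr prop = drmModeGetProperty(fd, prop_id);

	if (!prop)
		return;

	for (int i = 0; i < prop->count_enums; i++) {
		uint32_t blob_id = (uint32_t)prop->enums[i].value;
		drmModePropertyBlobPtr blob = drmModeGetPropertyBlob(fd, blob_id);

		printf("mode '%s': capability blob %u, %zu bytes\n",
		       prop->enums[i].name, blob_id,
		       blob ? (size_t)blob->length : 0);

		if (blob)
			drmModeFreePropertyBlob(blob);
	}

	drmModeFreeProperty(prop);
}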

> > +
> > +UAPI Name: PLANE_DEGAMMA_LUT
> > +Description: Blob property which allows a userspace to provide LUT values
> > +	     to apply degamma curve using the h/w plane degamma processing
> > +	     engine, thereby making the content as linear for further color
> > +	     processing. Userspace gets the size of LUT and precision etc
> > +	     from PLANE_DEGAMA_MODE_PROPERTY
> 
> So all degamma modes will always be some kind of LUT? That may be a 
> bit restrictive, as I understand AMD may have predefined or 
> parameterised curves that are not LUTs. So there should be room for an 
> arbitrary structure of parameters, which can be passed in as a blob 
> id, and the contents defined by the degamma mode.

For Intel's hardware these are LUTs, but AMD hardware seems to have fixed-function units for some
of these. We should think of a way to accommodate that option in the UAPI as well. We could extend
the DEGAMMA_MODE property to carry all the information, and the DEGAMMA_LUT property may then not
be needed, depending on the attributes passed via DEGAMMA_MODE. That could be one way to leave
room for both. @harry.wentland@xxxxxxx, thoughts?

> 
> LUT size, precision, and other details of each degamma mode would be 
> good to expose somehow. I kind of expected those would have been 
> exposed through the above mentioned "enum with blob id values" where 
> each blob content structure is defined by the respective enum value.

Yes, you are spot on here.

> > +
> > +* Plane CTM
> > +	* This is a Property to program the color transformation matrix.
> 
> No mode property here? Is there any hardware with something else than 
> a matrix at this point?

Not that I am aware of.

> Should we assume there will be hardware with something else, and have 
> a CSC mode property with only a single enum value defined so far:
> "matrix"? Or do we say PLANE_CTM is a matrix and if you have something 
> else in hardware, then invent a new property for it?

I think this should be fine, as we already have this for the crtc with no one complaining.
There may be hardware with a fixed-function operation for the CSC, but then it is not a matrix
and would be pretty hardware dependent, so we can leave that out of a generic UAPI.
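
For reference, assuming PLANE_CTM reuses struct drm_color_ctm and the S31.32 sign-magnitude
fixed-point convention of the existing CRTC CTM property, a userspace sketch programming a
BT.709-to-BT.2020 matrix could look like the following. The helper names are made up, the
property id is assumed to have been looked up already, and a real compositor would attach the
blob in an atomic commit rather than the legacy set-property call.

#include <math.h>
#include <stdint.h>
#include <xf86drm.h>
#include <xf86drmMode.h>

/*
 * Convert a double to the sign-magnitude S31.32 fixed-point format used
 * by struct drm_color_ctm (same convention as the existing CRTC CTM).
 */
static uint64_t ctm_fixed(double v)
{
	uint64_t fixed = (uint64_t)llround(fabs(v) * (double)(1ULL << 32));

	return (v < 0.0) ? fixed | (1ULL << 63) : fixed;
}

/* BT.709 -> BT.2020 linear RGB conversion matrix, row-major. */
static const double bt709_to_bt2020[9] = {
	0.6274, 0.3293, 0.0433,
	0.0691, 0.9195, 0.0114,
	0.0164, 0.0880, 0.8956,
};

static int set_plane_ctm(int fd, uint32_t plane_id, uint32_t ctm_prop_id)
{
	struct drm_color_ctm ctm;
	uint32_t blob_id;
	int ret;

	for (int i = 0; i < 9; i++)
		ctm.matrix[i] = ctm_fixed(bt709_to_bt2020[i]);

	ret = drmModeCreatePropertyBlob(fd, &ctm, sizeof(ctm), &blob_id);
	if (ret)
		return ret;

	/* A real compositor would attach this in an atomic commit. */
	return drmModeObjectSetProperty(fd, plane_id, DRM_MODE_OBJECT_PLANE,
					ctm_prop_id, blob_id);
}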

> > +	* This can be used to perform a color space conversion like
> > +	* BT2020 to BT709 or BT601 etc.
> > +	* This block is generally kept after the degamma unit so that
> 
> Not "generally". If blocks can change places, then it becomes 
> intractable for generic userspace to program.

Sure, I will drop this wording here. But one open issue will still remain for userspace: how it
discovers the pipeline dynamically for the respective hardware. Currently we have assumed that
this is the logical fixed order of the hardware units.

> > +	* linear data can be fed to it for conversion.
> > +
> > +UAPI Name: PLANE_CTM
> > +Description: Blob property which allows a userspace to provide CTM coefficients
> > +	     to do color space conversion or any other enhancement by doing a
> > +	     matrix multiplication using the h/w CTM processing engine
> > +
> 
> Speaking of color space conversions, we should probably define what 
> happens to out-of-range color values. Converting color into smaller 
> gamut or smaller dynamic range always has the risk of ending up with 
> out-of-range values. I suppose those get simply clipped independently on each color channel, right?

We have standard matrices to convert between color spaces. Out-of-range values are simply a
property of the conversion itself; I guess clipping will be the only option here (irrespective
of the hardware).
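
To illustrate why the clipping question matters, here is a small standalone sketch: a saturated
BT.2020 green pushed through the standard BT.2020-to-BT.709 matrix yields negative red and blue
components, and clipping each channel independently to [0, 1] changes the hue, which is the
behaviour Pekka is asking us to document.

#include <stdio.h>

/* BT.2020 -> BT.709 linear RGB matrix, row-major; note the negative terms. */
static const double bt2020_to_bt709[9] = {
	 1.6605, -0.5876, -0.0728,
	-0.1246,  1.1329, -0.0083,
	-0.0182, -0.1006,  1.1187,
};

static double clamp01(double v)
{
	return v < 0.0 ? 0.0 : (v > 1.0 ? 1.0 : v);
}

int main(void)
{
	/* A saturated BT.2020 green, outside the BT.709 gamut. */
	const double in[3] = { 0.0, 1.0, 0.0 };
	double out[3];

	for (int r = 0; r < 3; r++)
		out[r] = bt2020_to_bt709[r * 3 + 0] * in[0] +
			 bt2020_to_bt709[r * 3 + 1] * in[1] +
			 bt2020_to_bt709[r * 3 + 2] * in[2];

	/* Raw result: roughly (-0.588, 1.133, -0.101), i.e. out of range. */
	printf("raw:     %.3f %.3f %.3f\n", out[0], out[1], out[2]);

	/* Independent per-channel clipping shifts the hue. */
	printf("clipped: %.3f %.3f %.3f\n",
	       clamp01(out[0]), clamp01(out[1]), clamp01(out[2]));

	return 0;
}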

> Such clipping can change hue, so userspace would be better avoid 
> triggering clipping at all, but we still need to know what would happen with out-of-range values.
> 
> We would also need to know when clipping will happen. If FP16
> (half-float) FB produces out-of-range values and degamma stage is not 
> used, will the CTM see original or clipped values? Or is that 
> something we have to define as hardware-specific?
> 
> Generic userspace will try hard to avoid triggering hardware-specific 
> behaviour, so you can expect such behaviour to go unused.

Here the hardware should just operate on the programmed matrix values. The clipping is a
limitation of the color space conversion operation itself, and the hardware has no role to play
apart from honoring the matrix values and producing the resulting output.

> > +* Plane Gamma or Post-Curve
> > +	* This can be used to perform 2 operations:
> > +		* non-lineralize the framebuffer data. Can be used for
> > +		* non linear blending. It can be a gamma curve or EOTF
> > +		* for HDR.
> > +		* Perform Tone Mapping operation. This is an operation
> > +		* done when blending is done with HDR and SDR content.
> 
> I like this wording better than the wording for pre-curve: "can", not 
> "will". It leaves room for creative use of this processing block.

Ok thanks, will update pre-curve section as well.

> 
> Tone-mapping is needed always when dynamic range differs, so also for 
> HDR to HDR, not just SDR to/from HDR.

Yes correct, will update.

> > +
> > +UAPI Name: PLANE_GAMMA_MODE
> > +Description: Enum property with values as blob_id's which advertizes the
> > +	    possible gamma modes and lut ranges supported by the platform.
> > +	    This  allows userspace to query and get the plane gamma color
> > +	    caps and choose the appropriate gamma mode and create lut values
> > +	    accordingly.
> > +
> > +UAPI Name: PLANE_GAMMA_LUT
> > +Description: Blob property which allows a userspace to provide LUT values
> > +	     to apply gamma curve or perform tone mapping using the h/w plane
> > +	     gamma processing engine, thereby making the content as linear
> > +	     for further color processing. Userspace gets the size of LUT and
> > +	     precision etc from PLANE_GAMA_MODE_PROPERTY
> 
> The same comments here as with DEGAMMA.

Noted.

> > +
> > +This is part of one plane engine. Data from multiple planes will be 
> > +then fed to pipe where it will get blended. There is a similar set 
> > +of properties available at crtc level which acts on this blended data.
> > +
> > +Below is a sample usecase:
> > +
> > +  FB1 (BT709 SDR)  --> Degamma Block: Linearize - BT709 inverse --> CTM Matrix: BT709 to BT2020      --> Gamma Block: SDR to HDR Tone Mapping --+
> > +  FB2 (BT601 SDR)  --> Degamma Block: Linearize - BT601 inverse --> CTM Matrix: BT601 to BT2020      --> Gamma Block: SDR to HDR Tone Mapping --+
> > +  FB3 (BT2020 HDR) --> Degamma Block: Linearize - HDR OETF      --> CTM Matrix: NOP (Data in BT2020) --> Gamma Block: NOP (Data in HDR)       --+
> 
> EOTF, not OETF, since it is converting E to O, electrical to optical.

I think media decode would give EOTF-applied data that can be directly consumed by the display
sink (in case we have chosen display pass-through). Not sure here though; the terminology is a
bit confusing.
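
Either way, the curve that linearizes PQ-encoded HDR content is the SMPTE ST 2084 (PQ) EOTF in
the decode (E -> O) direction, whatever we end up naming the property. For reference, a minimal
sketch of that curve, which could be sampled into the plane degamma LUT just like the sRGB case
above:

#include <math.h>

/*
 * SMPTE ST 2084 (PQ) EOTF: decode a non-linear PQ value in [0, 1] to
 * linear light, normalized so 1.0 corresponds to 10000 cd/m^2.
 */
static double pq_eotf(double e)
{
	const double m1 = 2610.0 / 16384.0;		/* 0.1593017578125 */
	const double m2 = 2523.0 / 4096.0 * 128.0;	/* 78.84375 */
	const double c1 = 3424.0 / 4096.0;		/* 0.8359375 */
	const double c2 = 2413.0 / 4096.0 * 32.0;	/* 18.8515625 */
	const double c3 = 2392.0 / 4096.0 * 32.0;	/* 18.6875 */
	double p = pow(e, 1.0 / m2);
	double num = fmax(p - c1, 0.0);

	return pow(num / (c2 - c3 * p), 1.0 / m1);
}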

> > +  All three plane outputs then feed the pipe blender; the blended data goes through the
> > +  CRTC color blocks:
> > +
> > +  Blended data --> CRTC Degamma: Use to make data linear after blend
> > +               --> CRTC CTM:     Use for any Color Space Conversion
> > +               --> CRTC Gamma:   Use for Tone Mapping / apply transfer func
> > +               --> TO Port
> 
> Blending does not change whether the data is linear or not. I suppose 
> in this example, CRTC degamma and CTM would be passthrough, and gamma 
> would be the inverse display EOTF for encoding color values into what the monitor expects.

Yeah, will update this to make it clear.

> > +
> > +
> > +This patch series adds properties for plane color features. It adds 
> > +properties for degamma used to linearize data and CSC used for 
> > +gamut conversion. It also includes Gamma support used to again 
> > +non-linearize data as per panel supported color space. These can be 
> > +utilize by user space to convert planes from one format to another, 
> > +one color space to another etc.
> 
> FWIW, this is exactly the structure I have assumed in the Weston CM&HDR work.

It is great to hear that we are aligned on how the pipeline should work.

Thanks Pekka for taking time out and providing the feedback.

@harry.wentland@xxxxxxx, we can work together and build the design to accommodate both Intel's
and AMD's hardware needs. This will also make things generic enough for any other hardware
vendor.

Thanks & Regards,
Uma Shankar

> > +
> > +Userspace can take smart blending decisions and utilize these 
> > +hardware supported plane color features to get accurate color 
> > +profile. The same can help in consistent color quality from source 
> > +to panel taking advantage of advanced color features in hardware.
> > +
> > +These patches add the property interfaces and enable helper functions.
> > +This series adds Intel's XE_LPD hw specific plane gamma feature. We 
> > +can build up and add other platform/hardware specific 
> > +implementation on top of this series.
> > +
> > +Credits: Special mention and credits to Ville Syrjala for coming up 
> > +with a design for this feature and inputs. This series is based on 
> > +his original design and idea.
> 
> 
> Thanks,
> pq



