Re: [RFC PATCH v3 1/6] drm/doc: Color Management and HDR10 RFC

On 2021-09-23 9:40 a.m., Harry Wentland wrote:

On 2021-09-23 04:01, Pekka Paalanen wrote:
On Wed, 22 Sep 2021 11:06:53 -0400
Harry Wentland <harry.wentland@xxxxxxx> wrote:

On 2021-09-20 20:14, Harry Wentland wrote:
On 2021-09-15 10:01, Pekka Paalanen wrote:
On Fri, 30 Jul 2021 16:41:29 -0400
Harry Wentland <harry.wentland@xxxxxxx> wrote:
<snip>

+If a display's maximum HDR white level is correctly reported it is trivial
+to convert between all of the above representations of SDR white level. If
+it is not, defining SDR luminance as a nits value, or a ratio vs a fixed
+nits value is preferred, assuming we are blending in linear space.
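+
+(As a sketch of how trivial that conversion is, assuming the maximum HDR
+white level is known and blending happens in linear space; the helper
+names are made up for illustration, not from this patch:
+
+    /* Hypothetical helpers: convert an SDR white level between a ratio
+     * of the display's maximum HDR white level and an absolute nits
+     * value, assuming linear-light blending. */
+    static inline double sdr_ratio_to_nits(double ratio, double max_hdr_nits)
+    {
+            return ratio * max_hdr_nits;
+    }
+
+    static inline double sdr_nits_to_ratio(double sdr_nits, double max_hdr_nits)
+    {
+            return sdr_nits / max_hdr_nits;
+    }
+
+    /* e.g. 80 nit SDR white on a 600 nit panel:
+     * sdr_nits_to_ratio(80.0, 600.0) ~= 0.133 */
+)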
+
+It is our experience that many HDR displays do not report maximum white
+level correctly
Which value do you refer to as "maximum white", and how did you measure
it?
Good question. I haven't played with those displays myself but I'll try to
find out a bit more background behind this statement.

Some TVs report the EOTF but not the luminance values.
For example, here's an edid-decode capture of my eDP HDR panel:

   HDR Static Metadata Data Block:
     Electro optical transfer functions:
       Traditional gamma - SDR luminance range
       SMPTE ST2084
     Supported static metadata descriptors:
       Static metadata type 1
     Desired content max luminance: 115 (603.666 cd/m^2)
     Desired content max frame-average luminance: 109 (530.095 cd/m^2)
     Desired content min luminance: 7 (0.005 cd/m^2)
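
For reference, the cd/m^2 figures are decoded from the raw code values
using the CTA-861-G luminance coding, roughly like this (a sketch, not
the actual edid-decode source):

    #include <math.h>
    #include <stdio.h>

    /* CTA-861-G HDR static metadata luminance coding: cv is the raw
     * 0-255 code value from the data block. */
    static double max_luminance(unsigned char cv)
    {
            return 50.0 * pow(2.0, cv / 32.0);
    }

    static double min_luminance(unsigned char cv, double max)
    {
            return max * pow(cv / 255.0, 2.0) / 100.0;
    }

    int main(void)
    {
            double max = max_luminance(115);    /* 603.666 cd/m^2 */
            double avg = max_luminance(109);    /* 530.095 cd/m^2 */
            double min = min_luminance(7, max); /* 0.005 cd/m^2   */

            printf("max %.3f, max avg %.3f, min %.3f cd/m^2\n", max, avg, min);
            return 0;
    }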

I forget where I heard (you, Vitaly, someone?) that integrated panels
may not have the magic gamut and tone mapping hardware, which means
that software (or display engine) must do the full correct thing.

That's another reason to not rely on magic display functionality, which
suits my plans perfectly.

I've mentioned it before but there aren't really a lot of integrated
HDR panels yet. I think we've only seen one or two without tone-mapping
ability.

Either way we probably need at least the ability to tone-map the output
on the transmitter side (SW, GPU, or display HW).
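
As a rough illustration of what a transmitter-side tone-mapping curve
could look like, here is one possible soft-clip rolloff; this is just an
example, not anything from the RFC or from any particular HW:

    #include <math.h>

    /* Illustrative soft-clip rolloff: pass content luminance through
     * below a knee point, then compress it so the output asymptotically
     * approaches the display's peak. */
    static double tonemap_nits(double in_nits, double display_peak)
    {
            double knee = 0.75 * display_peak;

            if (in_nits <= knee)
                    return in_nits;

            return knee + (display_peak - knee) *
                   tanh((in_nits - knee) / (display_peak - knee));
    }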

It is really interesting to see the quality of a panel's TM algorithm by
specifying different metadata and checking how severe the loss of detail
is. Severe loss could mean there is no TM at all, or that a 1D LUT is
used to soften the clipping; a 3D LUT has wider possibilities for TM.

To facilitate this development we could use the LCMS proofing
capabilities to simulate, on a high-end (wide gamut) display, how the
image would look on a low-end one (narrow gamut displays or integrated
panels).
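
A minimal sketch of such a proofing transform with LittleCMS (the
profile file names are placeholders):

    #include <lcms2.h>

    /* Soft-proof wide-gamut content against a narrow-gamut panel
     * profile: render on the wide-gamut monitor while simulating how
     * the narrow-gamut panel would reproduce the image. */
    cmsHPROFILE wide   = cmsOpenProfileFromFile("wide-gamut.icc", "r");
    cmsHPROFILE narrow = cmsOpenProfileFromFile("narrow-panel.icc", "r");

    cmsHTRANSFORM proof = cmsCreateProofingTransform(
            wide, TYPE_RGB_8,             /* source image space */
            wide, TYPE_RGB_8,             /* shown on the wide-gamut display */
            narrow,                       /* panel being simulated */
            INTENT_PERCEPTUAL,            /* rendering intent */
            INTENT_RELATIVE_COLORIMETRIC, /* proofing intent */
            cmsFLAGS_SOFTPROOFING);

    /* then: cmsDoTransform(proof, src_pixels, dst_pixels, n_pixels); */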

I suspect on those TVs it looks like this:

   HDR Static Metadata Data Block:
     Electro optical transfer functions:
       Traditional gamma - SDR luminance range
       SMPTE ST2084
     Supported static metadata descriptors:
       Static metadata type 1

Windows has some defaults in this case and our Windows driver also has
some defaults.
Oh, missing information. Yay.

Using defaults in the 1000-2000 nits range would yield much better
tone-mapping results than assuming the monitor can support a full
10k nits.
Obviously.
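
For illustration, such a fallback could be as simple as this; the 1000
nit figure is an assumed value from the range mentioned above, not
anything a spec mandates:

    /* Hypothetical fallback for sinks whose HDR static metadata block
     * carries an EOTF but no desired-content luminance values.
     * edid_max_cv is the raw code value, 0 when absent. */
    double sink_max_nits = edid_max_cv ?
            50.0 * pow(2.0, edid_max_cv / 32.0) : /* CTA-861-G coding */
            1000.0;                               /* assumed default  */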

As an aside, recently we've come across displays where the max
average luminance is higher than the max peak luminance. This is
not a mistake but due to how the display's dimming zones work.
IOW, the actual max peak luminance in absolute units depends on the
current image average luminance. Wonderful, but what am I (the content
producer, the display server) supposed to do with that information...

Not sure what impact this might have on tone-mapping, other than
to keep in mind that we cannot assume that max_avg < max_peak.

Seems like it would lead to a very different tone mapping algorithm
which needs to compute the image average luminance before it can
account for max peak luminance (which I wouldn't know how to infer). So
either a two-pass algorithm, or taking the average from the previous
frame.
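
If one took the previous-frame option, it might look roughly like this;
the avg-to-peak mapping below is entirely made up, since panels don't
expose the real relation:

    #include <stddef.h>

    /* Carry the previous frame's average luminance into the next
     * frame's tone-mapping decision. */
    struct tm_state {
            double prev_avg_nits;
    };

    static double frame_avg_nits(const double *luma_nits, size_t n)
    {
            double sum = 0.0;

            for (size_t i = 0; i < n; i++)
                    sum += luma_nits[i];
            return sum / n;
    }

    static double effective_peak_nits(const struct tm_state *s,
                                      double max_cll, double max_fall)
    {
            /* Placeholder: blend between the two reported limits based
             * on how close the previous frame came to max_fall. */
            double t = s->prev_avg_nits < max_fall ?
                       s->prev_avg_nits / max_fall : 1.0;

            return (1.0 - t) * max_cll + t * max_fall;
    }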

I imagine that is going to be fun considering one needs to composite
different types of input images together, and the final tone mapping
might need to differ for each. Strictly speaking, that might lead to an
iterative optimisation algorithm which would be quite intractable in
practice to complete for a single frame at a time.

Maybe a good approach for this would be to just consider MaxAvg = MaxPeak
in this case. At least until one wants to consider dynamic tone-mapping,
i.e. tone-mapping that changes frame-by-frame based on content.
Dynamic tone-mapping might be challenging to do in SW but could be a possibility
with specialized HW. Though I'm not sure exactly what that HW would look like.
Maybe something like a histogram engine like Laurent mentions in
https://lists.freedesktop.org/archives/dri-devel/2021-June/311689.html.
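
Expressed as code, that simplification is just this, with hypothetical
field names rather than the actual DRM structs:

    /* Treat MaxAvg as equal to MaxPeak when the EDID reports
     * max_fall > max_cll, so static tone mapping can ignore the
     * APL dependency. */
    if (sink->max_fall > sink->max_cll)
            sink->max_fall = sink->max_cll;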

Harry

Thanks,
pq



