On Tue, 21 Sep 2021 14:05:05 -0400
Harry Wentland <harry.wentland@xxxxxxx> wrote:

> On 2021-09-21 09:31, Pekka Paalanen wrote:
> > On Mon, 20 Sep 2021 20:14:50 -0400
> > Harry Wentland <harry.wentland@xxxxxxx> wrote:
> >
> >> On 2021-09-15 10:01, Pekka Paalanen wrote:
> >>> On Fri, 30 Jul 2021 16:41:29 -0400
> >>> Harry Wentland <harry.wentland@xxxxxxx> wrote:
> >>>
> >>>> Use the new DRM RFC doc section to capture the RFC previously only
> >>>> described in the cover letter at
> >>>> https://patchwork.freedesktop.org/series/89506/
> >>>>
> >>>> v3:
> >>>>  * Add sections on single-plane and multi-plane HDR
> >>>>  * Describe approach to define HW details vs approach to define SW intentions
> >>>>  * Link Jeremy Cline's excellent HDR summaries
> >>>>  * Outline intention behind overly verbose doc
> >>>>  * Describe FP16 use-case
> >>>>  * Clean up links
> >>>>
> >>>> v2: create this doc
> >>>>
> >>>> v1: n/a
> >>>>
> >>>> Signed-off-by: Harry Wentland <harry.wentland@xxxxxxx>

Hi Harry!

...

> >>>> ---
> >>>>  Documentation/gpu/rfc/color_intentions.drawio |   1 +
> >>>>  Documentation/gpu/rfc/color_intentions.svg    |   3 +
> >>>>  Documentation/gpu/rfc/colorpipe               |   1 +
> >>>>  Documentation/gpu/rfc/colorpipe.svg           |   3 +
> >>>>  Documentation/gpu/rfc/hdr-wide-gamut.rst      | 580 ++++++++++++++++++
> >>>>  Documentation/gpu/rfc/index.rst               |   1 +
> >>>>  6 files changed, 589 insertions(+)
> >>>>  create mode 100644 Documentation/gpu/rfc/color_intentions.drawio
> >>>>  create mode 100644 Documentation/gpu/rfc/color_intentions.svg
> >>>>  create mode 100644 Documentation/gpu/rfc/colorpipe
> >>>>  create mode 100644 Documentation/gpu/rfc/colorpipe.svg
> >>>>  create mode 100644 Documentation/gpu/rfc/hdr-wide-gamut.rst

...

> I think we need to talk about what 1.0 means. Apple's EDR defines 1.0
> as "reference white" or in other words the max SDR white.
>
> That definition might change depending on the content type.

Yes, the definition of 1.0 depends on the... *cough* encoding.
Semantic encoding? Sometimes it just means the maximum signal value
(like everywhere until now), sometimes it maps to something else. It
might be relative (non-PQ systems) or absolute (the PQ system)
luminance, with a fixed scale after non-linear encoding.

The definition of 0.0, or rather { 0.0, 0.0, 0.0 }, is pretty much
always the darkest possible black - or is it? The darkest possible
black is usually not 0 cd/m², but something above that, depending on
both the device and the viewing environment. A display necessarily
reflects some light from the environment, which sets the black level of
the image even if the display itself were capable of exactly 0 cd/m².
Maybe VR goggles are an exception.

As a side note: if the viewing environment sets the display black
level, then the environment also sets that black's white point, which
may differ from the display's own white point. Also, the HVS has rods
for low-light vision, while color management concentrates wholly on the
cones that provide color vision, so dark shades might fall in the rod
range where color cannot be perceived. I digress though.

Then there is the whole issue of HVS adaptation, which basically sets
the observable dynamic range bracket (and, I think, what one considers
as white). The minimum observable color and luminance difference
depends on that bracket and on the color's position inside the bracket.
Trying to look at a monitor in bright daylight is a painful example of
these. ;-)

Btw. it was an awesome experience many years ago to spend 15-30 minutes
in a room lit with only a pale green light, and then walk outside. I
have never ever seen such vivid and saturated reds, yellows, violets,
browns(!), etc. as just after coming out of that room. That was the
real world, not a display. :-)

...

> > One thing I realised yesterday is that HLG displays are much better
> > defined than PQ displays, because HLG defines what OOTF the display
> > must implement. In a PQ system, the signal carries the full 10k nits
> > range, and then the monitor must do vendor magic to display it.
> > That's for tone mapping; not sure if HLG has an advantage in gamut
> > mapping as well.
>
> Doesn't the metadata describe the max content white? So even if the
> signal carries the full 10k nits the actual max luminance of the
> content should be encoded as part of the metadata.

It is in the HDR static metadata, yes, if present. There is also a
dynamic metadata version. However, the static metadata describes the
presentation on the (professional) mastering display, more or less.
Almost certainly the display an end user has is not a mastering-display
capable device, so arbitrary magic still needs to happen to squeeze the
signal down to what the display can do.

Or, I suppose, if the signal (image) does not need squeezing for people
who bought the average HDR display, then people who bought high-end HDR
displays will be unimpressed by the image on their display. Think of
buying a fancy new TV and then the image looks exactly the same as on
the old one.

Ironically, that is exactly what color management might do to SDR
content. One could expand a narrow range to a wider range, and I'm sure
displays do that too for more sales, but I guess you would have the
usual problems of upscaling: it's hard to invent detail where none was
recorded.

...

> Did anybody start any CM doc patches in Weston or Wayland yet?

There is
https://gitlab.freedesktop.org/swick/wayland-protocols/-/blob/color/unstable/color-management/color.rst
that we started a long time ago and have not really touched for a
while. Since we last touched it, at least my understanding has
developed somewhat.
It is linked from the overview in
https://gitlab.freedesktop.org/wayland/wayland-protocols/-/merge_requests/14
and if you want to propose changes, the way to do it is to file a MR in
https://gitlab.freedesktop.org/swick/wayland-protocols/-/merge_requests
against the 'color' branch. Patches are very much welcome; that doc
does not need to limit itself to Wayland. :-)

We also have issues tracked at
https://gitlab.freedesktop.org/swick/wayland-protocols/-/issues?scope=all&utf8=%E2%9C%93&state=opened

> > Pre-curve for instance could be a combination of decoding to linear
> > light and a shaper for the 3D LUT coming next. That's why we don't
> > call them gamma or EOTF, that would be too limiting.
> >
> > (Using a shaper may help to keep the 3D LUT size reasonable - I
> > suppose very much like those multi-segmented LUTs.)
>
> AFAIU a 3D LUT will need a shaper as they don't have enough precision.
> But that's going deeper into color theory than I understand. Vitaly
> would know better all the details around 3D LUT usage.

There is a very practical problem: the number of elements in a 3D LUT
grows with the cube of the taps per channel, so you can't have very
many taps per channel without the storage requirements blowing up. Each
element needs to be a 3-channel value, too, and 8 bits per channel is
not enough.

I'm really happy that Vitaly is working with us on Weston and Wayland.
:-) He's a huge help, and I feel like I'm currently the one slowing
things down by being backlogged on reviews.

Thanks,
pq
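P.S. To put rough numbers on the 3D LUT storage argument, here is a
small sketch. The tap counts and the 16-bit component width are
illustrative assumptions of mine, not any particular hardware's limits:

```python
# Rough storage arithmetic for a 3D LUT, illustrating why the taps per
# channel must stay small (and why a 1D shaper in front, which only
# needs taps * 3 entries, is comparatively cheap).

def lut3d_bytes(taps_per_channel: int, bits_per_component: int = 16) -> int:
    """Storage for a 3D LUT with the given taps on each of R, G, B.

    The element count is taps**3, and every element is a full RGB
    triplet of `bits_per_component`-bit values.
    """
    elements = taps_per_channel ** 3          # grows with the cube of the taps
    bytes_per_element = 3 * bits_per_component // 8
    return elements * bytes_per_element

if __name__ == "__main__":
    for taps in (9, 17, 33, 65):
        kib = lut3d_bytes(taps) / 1024
        print(f"{taps:>3} taps/channel -> {kib:8.1f} KiB")
```

Even the jump from 17 to 65 taps per channel costs more than 50x the
storage, which is why a shaper curve plus a modest 3D LUT tends to beat
a giant 3D LUT alone.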