RE: [RFC PATCH v3 1/6] drm/doc: Color Management and HDR10 RFC

> -----Original Message-----
> From: sebastian@xxxxxxxxxxxxxxxxx <sebastian@xxxxxxxxxxxxxxxxx>
> Sent: Monday, August 16, 2021 7:07 PM
> To: Harry Wentland <harry.wentland@xxxxxxx>
> Cc: Brian Starkey <brian.starkey@xxxxxxx>; Sharma, Shashank
> <shashank.sharma@xxxxxxx>; amd-gfx@xxxxxxxxxxxxxxxxxxxxx; dri-
> devel@xxxxxxxxxxxxxxxxxxxxx; ppaalanen@xxxxxxxxx; mcasas@xxxxxxxxxx;
> jshargo@xxxxxxxxxx; Deepak.Sharma@xxxxxxx; Shirish.S@xxxxxxx;
> Vitaly.Prosyak@xxxxxxx; aric.cyr@xxxxxxx; Bhawanpreet.Lakha@xxxxxxx;
> Krunoslav.Kovac@xxxxxxx; hersenxs.wu@xxxxxxx;
> Nicholas.Kazlauskas@xxxxxxx; laurentiu.palcu@xxxxxxxxxxx;
> ville.syrjala@xxxxxxxxxxxxxxx; nd@xxxxxxx; Shankar, Uma
> <uma.shankar@xxxxxxxxx>
> Subject: Re: [RFC PATCH v3 1/6] drm/doc: Color Management and HDR10 RFC
> 
> On 2021-08-16 14:40, Harry Wentland wrote:
> > On 2021-08-16 7:10 a.m., Brian Starkey wrote:
> >> On Fri, Aug 13, 2021 at 10:42:12AM +0530, Sharma, Shashank wrote:
> >>> Hello Brian,
> >>> (+Uma in cc)
> >>>

Thanks, Shashank, for cc'ing me, and apologies for being late here. It seems
all stakeholders are back now, so we can resume the UAPI discussion on color.

> >>> Thanks for your comments. Let me try to fill in for Harry to keep
> >>> the design discussion going. Please find my comments inline.
> >>>
> >
> > Thanks, Shashank. I'm back at work now. Had to cut my trip short due
> > to rising Covid cases and concern for my kids.
> >
> >>> On 8/2/2021 10:00 PM, Brian Starkey wrote:
> >>>>
> >>
> >> -- snip --
> >>
> >>>>
> >>>> Android doesn't blend in linear space, so any API shouldn't be
> >>>> built around an assumption of linear blending.
> >>>>
> >
> > This seems incorrect, but I guess ultimately the OS is in control of
> > this. If we want to allow blending in non-linear space with the new
> > API, we would need to describe either the blending space or the
> > pre/post-blending gamma/de-gamma.
> >
> > Any idea if this blending behavior in Android might get changed in the
> > future?
> 
> There is lots of software which blends in sRGB space, and designers have adjusted
> to the incorrect blending in a way that makes the result look right.
> Blending in linear space would make those images look incorrect.
> 

I feel we should just leave this to userspace to decide, rather than forcing linear or
non-linear blending in the driver.
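
For anyone who wants to see the difference concretely, here is a small
userspace-style C sketch. The sRGB transfer function constants are the
standard IEC 61966-2-1 ones; everything else is just for illustration:

#include <math.h>
#include <stdio.h>

/* IEC 61966-2-1 sRGB EOTF: encoded value -> linear light */
static double srgb_eotf(double v)
{
	return (v <= 0.04045) ? v / 12.92 : pow((v + 0.055) / 1.055, 2.4);
}

/* inverse EOTF: linear light -> encoded value */
static double srgb_inv_eotf(double v)
{
	return (v <= 0.0031308) ? v * 12.92 : 1.055 * pow(v, 1.0 / 2.4) - 0.055;
}

int main(void)
{
	double a = 0.0, b = 1.0;	/* encoded sRGB black and white */
	double alpha = 0.5;

	/* blend directly on encoded values, as much software does */
	double nonlinear = alpha * a + (1.0 - alpha) * b;

	/* blend on linear light, then re-encode */
	double linear = srgb_inv_eotf(alpha * srgb_eotf(a) +
				      (1.0 - alpha) * srgb_eotf(b));

	/* prints ~0.500 vs ~0.735: the two blends clearly disagree */
	printf("non-linear: %.3f, linear: %.3f\n", nonlinear, linear);
	return 0;
}

Content authored against one of these will look different under the other,
which is why designers have tuned for whichever one their platform uses.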

> >>>
> >>> If I am not wrong, we still need linear buffers for accurate gamut
> >>> transformation (sRGB -> BT.2020 or the other way around), don't we?
> >>
> >> Yeah, you need to transform the buffer to linear for color gamut
> >> conversions, but then back to non-linear (probably sRGB or gamma 2.2)
> >> for actual blending.
> >>
> >> This is why I'd like to have the per-plane "OETF/GAMMA" separate from
> >> tone-mapping, so that the composition transfer function is
> >> independent.
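
To make that ordering concrete, here is a rough per-pixel C sketch of the
per-plane pipeline as I read Brian's point. All names here are made up for
illustration, none of this is a proposed UAPI; gamma 2.2 and an identity
matrix stand in for the real transfer functions and gamut matrix:

#include <math.h>

struct px { double r, g, b; };

/* plane "degamma": encoded -> linear light */
static struct px degamma(struct px p)
{
	return (struct px){ pow(p.r, 2.2), pow(p.g, 2.2), pow(p.b, 2.2) };
}

/* 3x3 gamut matrix; identity as a stand-in for sRGB -> BT.2020 */
static struct px csc(struct px p)
{
	return p;
}

/* luminance scale as a stand-in for real tone mapping, done in linear */
static struct px tone_map(struct px p, double gain)
{
	return (struct px){ p.r * gain, p.g * gain, p.b * gain };
}

/* plane "gamma"/OETF: linear light -> the blending space */
static struct px regamma(struct px p)
{
	return (struct px){ pow(p.r, 1.0 / 2.2), pow(p.g, 1.0 / 2.2),
			    pow(p.b, 1.0 / 2.2) };
}

/* composition then blends the output of this, per plane */
struct px plane_pipe(struct px in)
{
	return regamma(tone_map(csc(degamma(in)), 1.0));
}

The point being that the final regamma() is chosen by the compositor for
its blending space, independently of whatever tone_map() does.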
> >>
> >>>
> >>
> >> ...
> >>
> >>>>> +
> >>>>> +Tonemapping in this case could be a simple nits value or `EDR`_ to
> >>>>> +describe how to scale the :ref:`SDR luminance`.
> >>>>> +
> >>>>> +Tonemapping could also include the ability to use a 3D LUT which
> >>>>> +might be accompanied by a 1D shaper LUT. The shaper LUT is required
> >>>>> +in order to ensure a 3D LUT with limited entries (e.g. 9x9x9, or
> >>>>> +17x17x17) operates in perceptual (non-linear) space, so as to
> >>>>> +spread the limited entries evenly across the perceived space.
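
To illustrate the shaper idea, here is a toy 1D version in C (pure
illustration, not modeled on any particular HW). Without the shaper, a
17-entry LUT indexed on linear light spends most of its entries on the
brightest stops:

#include <math.h>
#include <stddef.h>

#define LUT_SIZE 17

/* shaper: linear light -> perceptual index space (gamma 1/2.2 here;
 * real HW would use a programmable 1D LUT, e.g. a PQ-like curve) */
static double shaper(double linear)
{
	return pow(linear, 1.0 / 2.2);
}

/* look up a LUT with linear interpolation; because the shaper is
 * applied first, the 17 entries are spread evenly over *perceived*
 * brightness rather than over linear light */
static double lut_lookup(const double lut[LUT_SIZE], double linear)
{
	double x = shaper(linear) * (LUT_SIZE - 1);
	size_t i = (size_t)x;

	if (i >= LUT_SIZE - 1)
		return lut[LUT_SIZE - 1];
	return lut[i] + (x - i) * (lut[i + 1] - lut[i]);
}

A real 3D LUT does the same per axis, with trilinear interpolation across
the three channels.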
> >>>>
> >>>> Some terminology care may be needed here - up until this point, I
> >>>> think you've been talking about "tonemapping" being luminance
> >>>> adjustment, whereas I'd expect 3D LUTs to be used for gamut
> >>>> adjustment.
> >>>>
> >>>
> >>> IMO, what Harry wants to say here is that which HW block gets
> >>> picked and how tone mapping is achieved can be a very driver/HW-
> >>> specific thing, where one driver can use a 1D/fixed-function block,
> >>> whereas another one can choose more complex HW like a 3D LUT for
> >>> the same.
> >>>
> >>> The DRM layer needs to define only the property to hook the API
> >>> into the core driver, and the driver can decide which HW to pick
> >>> and configure for the activity. So when we have a tonemapping
> >>> property, we might not have a separate 3D-LUT property, or the
> >>> driver may fail atomic_check() if both of them are programmed for
> >>> different usages.
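
As an aside, that atomic_check() guard could look roughly like the sketch
below. To be clear, the state fields and helpers here are hypothetical,
nothing like this exists upstream today; it only illustrates rejecting a
commit that programs both a fixed-function tonemapping property and a
3D LUT for conflicting usages:

#include <drm/drm_plane.h>
#include <linux/errno.h>

/* hypothetical driver-private plane state */
struct fake_plane_state {
	struct drm_plane_state base;
	bool tonemapping_enabled;		/* fixed-function path */
	struct drm_property_blob *lut_3d_blob;	/* 3D LUT path */
};

#define to_fake_plane_state(s) \
	container_of(s, struct fake_plane_state, base)

static int fake_plane_atomic_check(struct drm_plane *plane,
				   struct drm_plane_state *state)
{
	struct fake_plane_state *fps = to_fake_plane_state(state);

	/* both paths programmed at once: reject the commit */
	if (fps->tonemapping_enabled && fps->lut_3d_blob)
		return -EINVAL;

	return 0;
}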
> >>
> >> I still think that directly exposing the HW blocks and their
> >> capabilities is the right approach, rather than a "magic" tonemapping
> >> property.
> >>
> >> Yes, userspace would need to have a good understanding of how to use
> >> that hardware, but if the pipeline model is standardised that's the
> >> kind of thing a cross-vendor library could handle.
> >>
> >
> > One problem with cross-vendor libraries is that they might struggle to
> > really be cross-vendor when it comes to unique HW behavior. Or they
> > might pick sub-optimal configurations as they're not aware of the
> > power impact of a configuration. What's an optimal configuration might
> > differ greatly between different HW.
> >
> > We're seeing this problem with "universal" planes as well.
> 
> I'm repeating what has been said before, but apparently it has to be said
> again: if a property can't be replicated exactly in a shader, the property is
> useless. If your hardware is so unique that it can't give us the exact formula
> we expect, you cannot expose the property.
> 
> Maybe my view on power consumption is simplistic, but I would expect enum < 1D LUT
> < 3D LUT < shader. Is there more to it?
> 
> Either way, if the fixed KMS pixel pipeline is not sufficient to expose the
> intricacies of real hardware, the right move would be to make the KMS pixel
> pipeline more dynamic, expose more hardware specifics, and create a
> hardware-specific user space, like Mesa. Moving the whole compositing stack
> with all its policies and decision making into the kernel is exactly the
> wrong way to go.
> 

I agree here; we can give userspace the flexibility to decide how it wants to use the
hardware blocks. Exposing the hardware capabilities to userspace and then servicing
requests on its behalf would be the right approach for the driver, I believe. Any
compositor or other userspace can define its own policy and drive the hardware.

We have already done that with the CRTC-level color properties, and we can do the
same for plane color. HDR will just be an extension that way.
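
As a sketch of what the plane-side plumbing could look like, mirroring how
the CRTC-level GAMMA_LUT blob property is created in drm_color_mgmt.c (the
property name below is illustrative only; settling the actual UAPI names is
exactly what the RFC is for):

#include <drm/drm_device.h>
#include <drm/drm_mode_object.h>
#include <drm/drm_plane.h>
#include <drm/drm_property.h>
#include <linux/errno.h>

static int fake_plane_attach_gamma_lut(struct drm_plane *plane)
{
	struct drm_property *prop;

	/* blob property holding an array of struct drm_color_lut */
	prop = drm_property_create(plane->dev, DRM_MODE_PROP_BLOB,
				   "PLANE_GAMMA_LUT", 0);
	if (!prop)
		return -ENOMEM;

	drm_object_attach_property(&plane->base, prop, 0);
	return 0;
}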

> Laurent Pinchart put this very well:
> https://lists.freedesktop.org/archives/dri-devel/2021-June/311689.html
> 
> >> It would definitely be good to get some compositor opinions here.
> >>
> >
> > For this we'll probably have to wait for Pekka's input when he's back
> > from his vacation.
> >
Yeah, Pekka's input would be really useful here.

We can work together, Harry, to come up with unified UAPIs that cater to a
general-purpose color hardware pipeline. I have just floated an RFC series with a
UAPI proposal; link below:
https://patchwork.freedesktop.org/series/90826/

Please check and share your feedback.

Regards,
Uma Shankar
> >> Cheers,
> >> -Brian
> >>



