Hi all,

I'm examining how the DRM color management properties (degamma, ctm, gamma) are applied in AMD display drivers.

As far as I could understand, thanks to Nicholas's documentation in amdgpu_dm/amdgpu_dm_color, DC drivers have the following per-plane color correction features:

- Input gamma LUT (de-normalized)
- Input CSC (normalized)
- Surface degamma LUT (normalized)
- Surface CSC (normalized)
- Surface regamma LUT (normalized)
- Output CSC (normalized)

So DM is "adapting" those DRM per-CRTC properties to fit into three of these color correction stages, which I guess are the surface stages:

- Surface degamma LUT (normalized)
- Surface CSC (normalized)
- Surface regamma LUT (normalized)

I'm trying to understand what this mapping is doing. A comment mentions that it is not possible to do these color corrections after blending, so is the same color correction pipeline applied to every plane before blending? (Is the surface the plane?) Does this adaptation affect the expected output? Moreover, is there something that I misunderstood? :)

That said, if DRM color mgmt supports a per-CRTC 3D LUT as the last step of color correction, I don't see how to accommodate it in the mapping above, but I see that DC already supports programming a 3D LUT on the DPP. Once DRM has the 3D LUT interface and DM has mapped it to a DPP property, will the 3D LUT end up at the end of the color correction pipeline? Is there anything I need to worry about when mapping DRM 3D LUT support? Or any advice?

Thanks in advance,

Melissa
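
P.S. To make my mental model concrete, here is a minimal, self-contained C sketch of how I currently picture the per-plane (surface) pipeline: each plane goes through degamma LUT -> CSC -> regamma LUT, and only then are the planes blended. The names are hypothetical and this is plain userspace code, not DC/DM code, so please correct me if the model itself is wrong.

/* Toy model of the per-plane color pipeline as I understand it:
 * surface degamma LUT -> surface CSC (3x3, CTM-like) -> surface regamma LUT,
 * applied to every plane before blending. Hypothetical names only. */
#include <stddef.h>
#include <stdio.h>

struct rgb { double r, g, b; };		/* normalized [0.0, 1.0] */

/* Sample a 1D LUT with linear interpolation (same LUT per channel). */
static double lut_sample(const double *lut, size_t size, double x)
{
	double pos, frac;
	size_t i;

	if (x <= 0.0)
		return lut[0];
	if (x >= 1.0)
		return lut[size - 1];

	pos = x * (double)(size - 1);
	i = (size_t)pos;
	frac = pos - (double)i;
	return lut[i] * (1.0 - frac) + lut[i + 1] * frac;
}

static struct rgb apply_lut(const double *lut, size_t size, struct rgb in)
{
	struct rgb out = {
		lut_sample(lut, size, in.r),
		lut_sample(lut, size, in.g),
		lut_sample(lut, size, in.b),
	};
	return out;
}

/* 3x3 color space conversion, row-major, like the DRM CTM layout. */
static struct rgb apply_csc(const double m[9], struct rgb in)
{
	struct rgb out = {
		m[0] * in.r + m[1] * in.g + m[2] * in.b,
		m[3] * in.r + m[4] * in.g + m[5] * in.b,
		m[6] * in.r + m[7] * in.g + m[8] * in.b,
	};
	return out;
}

/* The same three stages run on every plane; blending happens afterwards. */
static struct rgb plane_color_pipeline(struct rgb px,
				       const double *degamma, size_t dg_size,
				       const double m[9],
				       const double *regamma, size_t rg_size)
{
	px = apply_lut(degamma, dg_size, px);	/* surface degamma LUT */
	px = apply_csc(m, px);			/* surface CSC (CTM)   */
	px = apply_lut(regamma, rg_size, px);	/* surface regamma LUT */
	return px;
}

int main(void)
{
	/* Identity LUTs and identity matrix: output should equal input. */
	double lut[2] = { 0.0, 1.0 };
	double identity[9] = { 1, 0, 0,  0, 1, 0,  0, 0, 1 };
	struct rgb px = { 0.25, 0.50, 0.75 };

	px = plane_color_pipeline(px, lut, 2, identity, lut, 2);
	printf("%.2f %.2f %.2f\n", px.r, px.g, px.b);
	return 0;
}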