Re: [PATCH 0/1] [RFC] drm/fourcc: Add new unsigned R16_UINT/RG1616_UINT formats

On 27/06/2022 15:50, Pekka Paalanen wrote:
On Mon, 27 Jun 2022 13:40:04 +0000
Dennis Tsiang <Dennis.Tsiang@xxxxxxx> wrote:

This patch is an early RFC to discuss the viable options and
alternatives for inclusion of unsigned integer formats for the DRM API.

This patch adds new single-component 16-bit and two-component 32-bit
DRM fourccs that represent unsigned integer formats. The use case for
UINT formats, in our case, is supporting raw buffers for camera ISPs.

For images imported with a DRM fourcc + modifier combination, the GPU
driver needs a way to determine the datatype of the format, which the
DRM API currently does not provide explicitly, with the notable
exception of the floating-point fourccs such as
DRM_FORMAT_XRGB16161616F. As the DRM fourccs do not currently define
the interpretation of the data, should this information be made
explicit in the DRM API, similarly to how it is already done in
Vulkan?
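For comparison, Vulkan bakes the numeric interpretation into the
format enum itself, so the same 16-bit channel layout can be requested
with either meaning:

#include <vulkan/vulkan.h>

/* Identical bit layout, different interpretation: UNORM is sampled as
 * fixed-point [0.0, 1.0], UINT is sampled as raw unsigned integers. */
static const VkFormat r16_as_unorm = VK_FORMAT_R16_UNORM;
static const VkFormat r16_as_uint  = VK_FORMAT_R16_UINT;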

The reason for introducing the datatype into the DRM fourccs is the
alternative: any API (e.g. EGL) that lacks format datatype information
for a fourcc/modifier combination in dma_buf interop would have to
introduce additional explicit metadata/attributes encoding this
information, which would then be passed to the GPU driver. The
drawback is that this would require extending multiple graphics APIs
on every single platform.
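To make that concrete, here is a minimal sketch of today's
EGL_EXT_image_dma_buf_import path. The attribute names are real, but
there is no slot for a datatype; the commented-out attribute marks
where each API would have to invent one.

#include <EGL/egl.h>
#include <EGL/eglext.h>
#include <drm_fourcc.h> /* from libdrm, for DRM_FORMAT_R16 */

/* Import a single-plane R16 dma_buf as an EGLImage. The fourcc is the
 * only format information carried; nothing says UNORM vs. UINT. */
static EGLImage import_r16(EGLDisplay dpy, int fd, EGLAttrib width,
                           EGLAttrib height, EGLAttrib pitch)
{
        const EGLAttrib attrs[] = {
                EGL_WIDTH,                     width,
                EGL_HEIGHT,                    height,
                EGL_LINUX_DRM_FOURCC_EXT,      DRM_FORMAT_R16,
                EGL_DMA_BUF_PLANE0_FD_EXT,     fd,
                EGL_DMA_BUF_PLANE0_OFFSET_EXT, 0,
                EGL_DMA_BUF_PLANE0_PITCH_EXT,  pitch,
                /* EGL_DATATYPE_EXT, ...  <- hypothetical, does not exist */
                EGL_NONE,
        };

        return eglCreateImage(dpy, EGL_NO_CONTEXT, EGL_LINUX_DMA_BUF_EXT,
                              NULL, attrs);
}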

Having the DRM API expose the datatype information for formats would
save a lot of integration/verification work across the different
graphics APIs and platforms, as this information could be determined
from the DRM triple alone for dma_buf interop.

It would be good to gather some opinions on what others think about
introducing datatypes to the DRM API.

Hi,

I didn't quite grasp where this information is necessary, and when it
is necessary, is it that simple to communicate? Does it even belong
with the pixel format at all?

Let us consider the existing problems.

All traditional integer formats in drm_fourcc.h right now are unsigned.
They get interpreted as being in the range [0.0, 1.0] for color
operations, which means converting to another bit depth works
implicitly. That's where the simplicity ends. We assume full
quantization range unless otherwise noted in some auxiliary data, like
KMS properties (I forget if there even was a property to say a DRM
framebuffer uses limited quantization range). We assume all pixel data
is non-linearly encoded. There is no color space information. YUV-RGB
conversion matrix coefficients are handled by a KMS property IIRC.
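As a minimal illustration of that implicit convention: widening 8 bpc
to 16 bpc is just bit replication, with no extra datatype metadata
needed.

#include <stdint.h>

/* Nominal [0.0, 1.0] makes bit-depth conversion implicit: replicating
 * the bits maps 0xff (1.0 at 8 bpc) exactly to 0xffff (1.0 at 16 bpc). */
static inline uint16_t unorm8_to_unorm16(uint8_t v)
{
        return ((uint16_t)v << 8) | v;
}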

Coming back to the nominal range [0.0, 1.0]: that's an implicit
assumption that allows us to apply things like LUTs. It already
breaks down if you choose a floating-point format instead of an
unsigned integer format. Is FP pixel value 1.0 the same as nominal
1.0? Or is the FP pixel value 255.0 the same as nominal 1.0? KMS has
no way to know or control that, AFAIK.

If you had a UINT format, meaning the nominal value range is
[0.0, 65535.0] (e.g. for 16 bpc) instead of [0.0, 1.0], then how does
that work with a LUT element in the color pipeline? How would it be
both meaningful and different from the existing 16 bpc integer
format?
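To sketch the problem (assuming a LUT defined over the nominal range,
as KMS gamma LUTs are): with a UNORM interpretation the normalization
step is well defined, while for UINT there is no agreed one, so it is
unclear what the LUT should even be indexed with.

#include <stddef.h>
#include <stdint.h>

/* UNORM: the nominal value is v / 65535.0, so scaling it to a LUT
 * index is well defined. A UINT pixel has no such normalization. */
static uint16_t lut_apply_unorm16(const uint16_t *lut, size_t lut_len,
                                  uint16_t v)
{
        size_t i = (size_t)v * (lut_len - 1) / 65535;

        return lut[i];
}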

What's the actual difference between R16 and R16_UINT, and what
difference does it make to a GPU driver?

Maybe that is intimately dependent on the API where the pixel formats
are used?

Maybe for KMS R16 and R16_UINT would be completely equivalent, but not
for some other API?

We also need to be very careful not to load the pixel format with
meaning that does not belong there. Non-linear encoding (transfer
function) is obviously something that is completely unrelated to the
pixel format, as long as the pixel format defines a conversion to the
nominal value range. Color space (primaries and white point) is
another thing that has nothing to do with the pixel format, and so
must not be in any way implied by it.

Should a pixel format define how the raw pixel values are converted to
nominal values?

No, because we have quantization range as a separate property,
currently with "full" and "limited" understood, where "limited" means
different things depending on the color model.
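For instance, "limited" range for 8-bit luma means code values
[16, 235], so undoing it is a property of the quantization range, not
of the pixel format:

#include <stdint.h>

/* Limited ("studio") range to full range for 8-bit luma:
 * [16, 235] -> [0, 255], clamping out-of-range codes. */
static uint8_t limited_to_full_y8(uint8_t y)
{
        int v = ((int)y - 16) * 255 / (235 - 16);

        if (v < 0)
                v = 0;
        if (v > 255)
                v = 255;
        return (uint8_t)v;
}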

Color model is defined by the pixel format: we have RGB and YUV
formats. Likewise, chroma sub-sampling is defined by the pixel format.

Hmm.


Thanks,
pq

Hi Pekka,

Thanks for your comments. This is not intended to be used for KMS,
where indeed there would be no difference. This proposal is for other
graphics APIs such as Vulkan, which requires the application to be
explicit upfront about how it will interpret the data, whether that be
UNORM, UINT, etc. We want to be able to import dma_bufs and create
VkImages with a "_UINT" VkFormat. However, there is currently no
explicit mapping between the DRM fourcc + modifier combos and the
"_UINT" VkFormats. One solution is to encode that into the fourccs,
which is what this RFC proposes.
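
For illustration, a minimal sketch of that import path using
VK_EXT_image_drm_format_modifier and VK_EXT_external_memory_dma_buf;
error handling and the actual memory import (vkAllocateMemory with
VkImportMemoryFdInfoKHR) are omitted, and the modifier is assumed to
come from the buffer's producer:

#include <stdint.h>
#include <vulkan/vulkan.h>

/* Create a VkImage that will be bound to an imported dma_buf, with an
 * explicitly integer format. The datatype choice (UINT vs. UNORM) is
 * made here, in the VkFormat -- the DRM fourcc + modifier alone do
 * not carry it today. */
static VkResult create_r16_uint_image(VkDevice dev, uint64_t modifier,
                                      uint32_t width, uint32_t height,
                                      VkImage *image)
{
        const VkImageDrmFormatModifierListCreateInfoEXT mod_info = {
                .sType = VK_STRUCTURE_TYPE_IMAGE_DRM_FORMAT_MODIFIER_LIST_CREATE_INFO_EXT,
                .drmFormatModifierCount = 1,
                .pDrmFormatModifiers = &modifier,
        };
        const VkExternalMemoryImageCreateInfo ext_info = {
                .sType = VK_STRUCTURE_TYPE_EXTERNAL_MEMORY_IMAGE_CREATE_INFO,
                .pNext = &mod_info,
                .handleTypes = VK_EXTERNAL_MEMORY_HANDLE_TYPE_DMA_BUF_BIT_EXT,
        };
        const VkImageCreateInfo info = {
                .sType = VK_STRUCTURE_TYPE_IMAGE_CREATE_INFO,
                .pNext = &ext_info,
                .imageType = VK_IMAGE_TYPE_2D,
                .format = VK_FORMAT_R16_UINT, /* the explicit datatype */
                .extent = { width, height, 1 },
                .mipLevels = 1,
                .arrayLayers = 1,
                .samples = VK_SAMPLE_COUNT_1_BIT,
                .tiling = VK_IMAGE_TILING_DRM_FORMAT_MODIFIER_EXT,
                .usage = VK_IMAGE_USAGE_SAMPLED_BIT,
                .sharingMode = VK_SHARING_MODE_EXCLUSIVE,
                .initialLayout = VK_IMAGE_LAYOUT_UNDEFINED,
        };

        return vkCreateImage(dev, &info, NULL, image);
}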

Thanks,
Dennis



