Re: [PATCH v7 1/9] media: v4l2: Extend pixel formats to unify single/multi-planar handling (and more)

On 6/29/23 00:27, Nicolas Dufresne wrote:


Hi,

please avoid HTML replies on the mailing list.
I need to blame Outlook.
Le mardi 27 juin 2023 à 14:42 +0800, Hsia-Jun Li a écrit :
+/**
+ * struct v4l2_ext_pix_format - extended multiplanar format definition
+ * @type:          enum v4l2_buf_type; type of the data stream
+ * @width:         image width in pixels
+ * @height:                image height in pixels
+ * @pixelformat:   little endian four character code (fourcc)
+ * @modifier:              modifier applied to the format (used for tiled formats
+ *                 and other kind of HW-specific formats, like compressed
+ *                 formats) as defined in drm_fourcc.h
+ * @field:         enum v4l2_field; field order (for interlaced video)
+ * @colorspace:            enum v4l2_colorspace; supplemental to pixelformat
+ * @plane_fmt:             per-plane information
+ * @flags:         format flags (V4L2_PIX_FMT_FLAG_*)
+ * @ycbcr_enc:             enum v4l2_ycbcr_encoding, Y'CbCr encoding
+ * @hsv_enc:               enum v4l2_hsv_encoding, HSV encoding
+ * @quantization:  enum v4l2_quantization, colorspace quantization
+ * @xfer_func:             enum v4l2_xfer_func, colorspace transfer function
+ * @reserved:              drivers and applications must zero this array
+ */
+struct v4l2_ext_pix_format {
+   __u32                           type;
+   __u32                           width;
+   __u32                           height;
+   __u32                           pixelformat;
+   __u64                           modifier;
+   __u32                           field;
+   __u32                           colorspace;
+
+   struct v4l2_plane_pix_format    plane_fmt[VIDEO_MAX_PLANES];
+   __u8                            flags;
+    union {
+           __u8                            ycbcr_enc;
+           __u8                            hsv_enc;
+   };
+   __u8                            quantization;
+   __u8                            xfer_func;

     I heard a suggestion that we could remove the colorimetry fields
     here. They are indeed useless for codec M2M drivers when no pixel
     format conversion is involved, but even HDMI (DRM) cares about
     colorspace.
     For example, if a sink (TV) displays RGB but the input frame
     buffer is YUV, colorimetry matters, or the wrong EOTF would be
     used. If the YUV is MPEG (limited) range but it is treated as
     full range, you would find that black is not black enough and
     white looks gray; color bias would also occur.
     This problem may not happen on an ultra-high-resolution TV where
     only YUV color formats are supported due to HDMI bandwidth
     limitations.
     The problem I want to raise is the time cost of enumeration:
     each pixel format with a colorimetry setting would require an
     ioctl(), and an application like GStreamer would enumerate all
     the possible colorimetries.
     It would be better if we had something like a DRM blob id, so
     the application could copy the data from a non-DMA buffer
     provided by the kernel.
This is a good topic. Colorimetry could indeed be moved away from the format,
considering it cannot be enumerated. It remains that this information needs to
be passed around, and the format of a blob in media space is not as restricted
as with display HW. I think keeping an "exploded version" of the colorimetry
remains needed.

Remember though that for stateful decoders, where the information can be stored
in the bitstream, the decoder is responsible for returning that information.
Currently it's passed through the G_FMT call; it would need to be replaced with
a control, similar to the HDR10 static metadata. If the colorimetry is no
longer static in the future, and may change while streaming, one option would
be read-only (RO) requests. This was foreseen for HDR10+ and Dolby Vision
metadata notably, though other options exist.

That is why I want to promote importing a whole buffer (framebuffer id) from
DRM.

We can't pass such DMA metadata along with the main data (graphics here) in a
queue.


There exist known decoders that can do YUV to RGB conversion using an inline
post-processor (VC8000D and newer is an example), and for these to return
correct colors, the colorimetry information needs to be passed. So it's not
strictly useless.

In short, if we drop colorimetry from the format, we also need to go ahead and
design a replacement for it that allows the application to detect changes.

I am worried that when people set up a UVC device (USB camera), they may
ignore this step.

A camera could be in a BT.601 colorspace rather than BT.709, which has a
slight difference, and BT.601 also has a full-range variant.

regards,
Nicolas

--
Hsia-Jun(Randy) Li



