RE: [RFC] V4L HDR Architecture Proposal

Hi Hans,

Thanks for your response!

> -----Original Message-----
> From: linux-media-owner@xxxxxxxxxxxxxxx <linux-media-
> owner@xxxxxxxxxxxxxxx> On Behalf Of Hans Verkuil
> Sent: Thursday, January 23, 2020 6:36 PM
> To: Dylan Yip <dylany@xxxxxxxxxx>; Laurent Pinchart
> <laurent.pinchart@xxxxxxxxxxxxxxxx>; linux-media@xxxxxxxxxxxxxxx
> Cc: Varunkumar Allagadapa <VARUNKUM@xxxxxxxxxx>; Madhurkiran
> Harikrishnan <MADHURKI@xxxxxxxxxx>; Jianqiang Chen
> <jianqian@xxxxxxxxxx>; Hyun Kwon <hyunk@xxxxxxxxxx>; Cyril Chemparathy
> <cyrilc@xxxxxxxxxx>; Vishal Sagar <vsagar@xxxxxxxxxx>; Sandip Kothari
> <sandipk@xxxxxxxxxx>; Subhransu Sekhar Prusty <sprusty@xxxxxxxxxx>
> Subject: Re: [RFC] V4L HDR Architecture Proposal
> 
> Hi Dylan,
> 
> On 1/22/20 9:13 PM, Dylan Yip wrote:
> > Hi All,
> >
> > We are planning to add HDR10 and HDR10+ metadata support into the V4L
> framework and were hoping for some feedback before we started
> implementation.
> 
> Nice!
> 
> >
> > For context, Xilinx HDMI RX IP currently uses a AXI LITE interface where
> HDR metadata is obtained from a hardware FIFO. To access these packets a
> CPU copy is required.
> > We are in the process of migrating towards a AXI MM interface where the
> hardware will directly write HDR metadata into memory.
> > Currently the HDMI RX driver (https://github.com/Xilinx/hdmi-
> modules/blob/master/hdmi/xilinx-hdmirx.c) is modeled as a v4l subdev. This
> is linked to a DMA IP which utilizes the DMA engine APIs and registers itself
> as a video node for video data.
> >
> > HDR10 will only consist of static metadata which will come once per stream.
> However, HDR10+ will have dynamic metadata which can potentially come
> once per frame and be up to ~4000 bytes. We would like V4L architecture to
> be flexible to support both.
> 
> The key here is the difference between Extended InfoFrames, which can be
> long, and the others, which have a maximum size. The latter should be handled
> by controls; the former is more difficult.
> 

Are you suggesting that we handle static HDR metadata via read-only V4L2 controls on a metadata video node?

> Can you tell a bit more about how the hardware operates? Are all InfoFrames
> obtained through the hw fifo, or are some stored in registers and some go
> through the fifo?
> 

In the current implementation of the HDMI Rx IP, all InfoFrames are read byte by byte from a register backed by a FIFO.
The register is accessed over an AXI-Lite interface.
The FIFO can store a maximum of 8 packets. Each packet is 36 bytes in size (31 bytes of data and 5 bytes of ECC calculated by the IP).
InfoFrames are one type of packet; there are other types, such as the General Control Packet and the Audio Clock Regeneration Packet (see Table 5-8, packet types, in the HDMI specification v1.4b).

In the future we plan to add an AXI MM interface to the IP to handle dynamic HDR. The tentative behavior is as follows:
the driver provides a buffer pointer to the IP via a register, and the IP writes the extracted InfoFrame data into this buffer.
At frame sync, the IP returns the length of the data in the provided buffer.

> Does the hardware set maximum sizes for specific InfoFrames or the total
> size of all InfoFrames combined? Or can it be any size?
>
I hope the information above about the FIFO depth of the current HDMI Rx IP answers this.
 
> Does it accept any InfoFrame or only specific InfoFrame types? Or is this
> programmable?
> 

The HDMI Rx IP accepts all InfoFrame types.

Regards
Vishal Sagar

> Regards,
> 
> 	Hans
> 
> >
> > We have 2 different proposals that we believe will work:
> >
> > A. 2 video node approach (1 for video, 1 for metadata) - This will align with
> current v4l metadata structure (i.e. uvc) but will require our HDMI RX driver
> to register a subdev and device node
> > 	a. Our HDMI RX driver will register a v4l subdev (for video data) and a
> metadata node
> > 		i. Is this acceptable?
> > 	b. Applications will qbuf/dqbuf to both video and metadata nodes for
> > each frame
> >
> > B. 1 video node approach - This will avoid mixing v4l subdev and v4l device
> node functionality inside HDMI RX driver but it strays from current v4l
> metadata architecture and also changes v4l subdev functionality
> > 	a. We would add a "read" function to v4l subdev's
> > 		i. This will also require us to add some "capabilities" field to
> subdev or be able to query for the "read" function
> > 	b. HDMI Rx driver will register a v4l subdev with "read"
> function/capability
> > 	c. Application can directly pass a buffer in the "read" function to
> HDMI RX subdev to obtain HDR metadata
> > 		i. We will need to pass subdev name from application or be
> able to query all subdevs for this "read" capability, is this acceptable?
> >
> > Please let me know your opinions on which approach is best or propose
> > another approach if these 2 are unfit. Thanks
> >
> > Best,
> > Dylan Yip
> >




