Hi Dylan,

On 1/22/20 9:13 PM, Dylan Yip wrote:
> Hi All,
>
> We are planning to add HDR10 and HDR10+ metadata support into the V4L
> framework and were hoping for some feedback before we start the
> implementation.

Nice!

> For context, the Xilinx HDMI RX IP currently uses an AXI-Lite interface
> where HDR metadata is obtained from a hardware FIFO. To access these
> packets a CPU copy is required.
> We are in the process of migrating towards an AXI MM interface where
> the hardware will directly write HDR metadata into memory.
> Currently the HDMI RX driver
> (https://github.com/Xilinx/hdmi-modules/blob/master/hdmi/xilinx-hdmirx.c)
> is modeled as a v4l subdev. This is linked to a DMA IP which utilizes
> the DMA engine APIs and registers itself as a video node for video data.
>
> HDR10 will only consist of static metadata which will come once per
> stream. However, HDR10+ will have dynamic metadata which can
> potentially come once per frame and be up to ~4000 bytes. We would
> like the V4L architecture to be flexible enough to support both.

The key here is the difference between Extended InfoFrames, which can be
long, and the other InfoFrames, which have a fixed maximum size. The
latter should be handled by controls; the former is more difficult.

Can you tell us a bit more about how the hardware operates? Are all
InfoFrames obtained through the hw FIFO, or are some stored in registers
while others go through the FIFO? Does the hardware set maximum sizes
for specific InfoFrames, or for the total size of all InfoFrames
combined? Or can it be any size? Does it accept any InfoFrame, or only
specific InfoFrame types? Or is this programmable?

Regards,

	Hans

> We have 2 different proposals that we believe will work:
>
> A. 2 video node approach (1 for video, 1 for metadata) - This aligns
>    with the current v4l metadata structure (i.e. uvc), but it requires
>    our HDMI RX driver to register both a subdev and a device node.
>    a. Our HDMI RX driver will register a v4l subdev (for video data)
>       and a metadata node.
>       i. Is this acceptable?
>    b.
>       Applications will qbuf/dqbuf to both the video and metadata
>       nodes for each frame.
>
> B. 1 video node approach - This avoids mixing v4l subdev and v4l
>    device node functionality inside the HDMI RX driver, but it strays
>    from the current v4l metadata architecture and also changes v4l
>    subdev functionality.
>    a. We would add a "read" function to v4l subdevs.
>       i. This will also require us to add some "capabilities" field to
>          the subdev, or to be able to query for the "read" function.
>    b. The HDMI RX driver will register a v4l subdev with the "read"
>       function/capability.
>    c. An application can directly pass a buffer to the HDMI RX
>       subdev's "read" function to obtain HDR metadata.
>       i. We will need to pass the subdev name from the application, or
>          be able to query all subdevs for this "read" capability. Is
>          this acceptable?
>
> Please let me know your opinions on which approach is best, or propose
> another approach if these 2 are unfit. Thanks!
>
> Best,
> Dylan Yip