Hi *,

I need to implement a driver for a CSI-2 camera system on a board using the Rockchip RK3399 SoC, and I'm wondering what the "proper" way of doing things would be.

We need to capture variable sequences of frames, where each frame has specific settings for gain/exposure and illumination (GPIO-based). Configuration of the sequences happens from userspace.

My approach would be:

1) platform_device -> media_device for the whole system (camera sensor, CSI-2 PHY/ISP, LED driver, GPIOs)
2) i2c_driver for the camera -> standard V4L2 subdevice supporting V4L2_CID_EXPOSURE, etc.
3) i2c_driver for the LED driver -> V4L2 subdevice implementing the flash control interface (V4L2_CID_FLASH_LED_MODE, etc.)
4) an additional V4L2 device node for metadata input (sequence definitions)
5) an additional V4L2 device node for metadata output (synced with the video output)

Q1) AFAIU, this media controller device could support the Request API and handle the entire sequence with a single call from userspace - is this correct?

Q2) Am I even on the right track here, or what would be the right way(tm) to represent this kind of system?

Q3) If this approach is sound, is there any way I could make use of the rkisp1 driver infrastructure to set up the CSI-2 PHY and the DMA engines without extracting/rewriting parts of the code?

An alternative solution would be to use the STROBE output of the camera sensor to detect end-of-frame and advance to the next sequence element. That would not even require any V4L2 integration, but it does need an additional interrupt handler, and the metadata would have to be attached to the frames in userspace. If, for any reason, the counter gets out of sync with the output frames, this could go wrong and would be very hard to detect, because the metadata and the frames are not tied together by the V4L2 frame sequence number.

Any help is appreciated, thanks a lot!

-Jens
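
For item 2, here is roughly what I have in mind for the sensor subdevice - just a sketch, all mycam_* names are made up and error handling is omitted:

```c
/* Sketch only: hypothetical "mycam" sensor, error handling trimmed. */
#include <linux/i2c.h>
#include <media/v4l2-ctrls.h>
#include <media/v4l2-subdev.h>

struct mycam {
	struct v4l2_subdev sd;
	struct v4l2_ctrl_handler ctrls;
};

static int mycam_s_ctrl(struct v4l2_ctrl *ctrl)
{
	struct mycam *cam = container_of(ctrl->handler, struct mycam, ctrls);

	switch (ctrl->id) {
	case V4L2_CID_EXPOSURE:
		/* write the exposure registers over i2c here */
		return 0;
	case V4L2_CID_ANALOGUE_GAIN:
		/* write the gain registers over i2c here */
		return 0;
	}
	return -EINVAL;
}

static const struct v4l2_ctrl_ops mycam_ctrl_ops = {
	.s_ctrl = mycam_s_ctrl,
};

static const struct v4l2_subdev_ops mycam_subdev_ops = {
	/* video/pad ops elided in this sketch */
};

static int mycam_probe(struct i2c_client *client)
{
	struct mycam *cam = devm_kzalloc(&client->dev, sizeof(*cam), GFP_KERNEL);

	v4l2_i2c_subdev_init(&cam->sd, client, &mycam_subdev_ops);

	/* control ranges below are placeholders, not real sensor limits */
	v4l2_ctrl_handler_init(&cam->ctrls, 2);
	v4l2_ctrl_new_std(&cam->ctrls, &mycam_ctrl_ops,
			  V4L2_CID_EXPOSURE, 0, 65535, 1, 100);
	v4l2_ctrl_new_std(&cam->ctrls, &mycam_ctrl_ops,
			  V4L2_CID_ANALOGUE_GAIN, 0, 255, 1, 16);
	cam->sd.ctrl_handler = &cam->ctrls;

	return v4l2_async_register_subdev(&cam->sd);
}
```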
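
Regarding Q1, my understanding is that it would not literally be a single call, but rather one request per sequence element, all queued up front; the per-frame controls then get applied when each request reaches the hardware. A userspace sketch for one element (fds and control values are made up, error checks trimmed):

```c
/* Sketch: queue one sequence element through the Request API. */
#include <fcntl.h>
#include <sys/ioctl.h>
#include <linux/media.h>
#include <linux/videodev2.h>

int queue_frame(int media_fd, int video_fd, int buf_index, int exposure)
{
	int req_fd;
	struct v4l2_ext_control ctrl = {
		.id = V4L2_CID_EXPOSURE,
		.value = exposure,
	};
	struct v4l2_ext_controls ctrls = {
		.which = V4L2_CTRL_WHICH_REQUEST_VAL,
		.count = 1,
		.controls = &ctrl,
	};
	struct v4l2_buffer buf = {
		.type = V4L2_BUF_TYPE_VIDEO_CAPTURE,
		.memory = V4L2_MEMORY_MMAP,
		.index = buf_index,
		.flags = V4L2_BUF_FLAG_REQUEST_FD,
	};

	/* allocate a request on the media device */
	ioctl(media_fd, MEDIA_IOC_REQUEST_ALLOC, &req_fd);

	/* attach the per-frame controls to the request */
	ctrls.request_fd = req_fd;
	ioctl(video_fd, VIDIOC_S_EXT_CTRLS, &ctrls);

	/* queue a buffer against the same request, then submit it */
	buf.request_fd = req_fd;
	ioctl(video_fd, VIDIOC_QBUF, &buf);
	ioctl(req_fd, MEDIA_REQUEST_IOC_QUEUE, 0);

	return req_fd;	/* poll()able for completion */
}
```

Calling this once per sequence element before (or while) streaming would express the whole sequence, assuming the drivers involved actually support requests.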
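
For the STROBE alternative, the interrupt handler I have in mind would look something like this (again only a sketch, seq_ctx and the GPIO wiring are hypothetical; the i2c writes have to happen in the threaded half since they can sleep):

```c
/* Sketch: advance gain/exposure on the sensor STROBE edge via GPIO IRQ. */
#include <linux/gpio/consumer.h>
#include <linux/interrupt.h>

struct seq_ctx {
	unsigned int pos;	/* current sequence element */
	unsigned int len;	/* number of elements, set from userspace */
	/* per-element gain/exposure/illumination settings would live here */
};

static irqreturn_t strobe_isr(int irq, void *data)
{
	struct seq_ctx *ctx = data;

	/* end of frame ctx->pos: hand off to the thread, i2c sleeps */
	ctx->pos = (ctx->pos + 1) % ctx->len;
	return IRQ_WAKE_THREAD;
}

static irqreturn_t strobe_thread(int irq, void *data)
{
	/* program the sensor/LED settings for element ctx->pos here */
	return IRQ_HANDLED;
}

/* in probe, with dev/irq/ctx already set up:
 *	devm_request_threaded_irq(dev, irq, strobe_isr, strobe_thread,
 *				  IRQF_TRIGGER_FALLING, "strobe", ctx);
 */
```

But this is exactly the part that worries me: nothing here ties ctx->pos to the V4L2 frame sequence number, so a missed edge silently shifts all subsequent metadata.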