Hi, I am working on the Jetson NX platform and have a camera-related problem. I am using two RPi HQ (Sony IMX477 based) cameras via the nvarguscamerasrc GStreamer element. When the two cameras are used in GStreamer, as the examples demonstrate, each one runs its ISP algorithms (white balance, gain, exposure, etc.) separately. Part of the white balance algorithm uses the actual pixels in the image to determine the white point, which results in two differently toned images, especially if the scene is not evenly lit. This is a problem because I need to stitch the images into a single panorama.

To get around this I am thinking about approaching the cameras at the device level (/dev/video0 and /dev/video1). I have interfaced a single camera through the V4L2 interface and I can retrieve raw Bayer frames as well as set exposure and gain through ioctl calls (a sketch of those calls is below), so I know the functionality needed to hand over image data.

I am therefore thinking about writing a kernel module that would wrap the two cameras at the V4L2 level and expose them as a single /dev/video2 device. It would need to merge the image data and route the camera settings (exposure and gain) out to both cameras. If this succeeds, I should be able to use the GStreamer elements as usual.

I have tried libfuse's CUSE interface, but it does not expose mmap. So I have been following various kernel module tutorials to get a loadable kernel module up that can respond to read, write, ioctl and mmap calls. What I am missing is an example of how to use another device handle from within a kernel module (my current guess at how to even open one is sketched below). I am therefore looking for an example, as I guess I am not the first person attempting to wrap a device; I imagine network snooping would be done in a similar way. Do you know of any samples I could learn from, especially ones covering how to use other devices from inside a kernel module?

Kind regards,
Jesper
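
For reference, this is roughly how I set exposure and gain on a single camera from userspace today. It is only a minimal sketch: I am using the generic V4L2_CID_EXPOSURE / V4L2_CID_GAIN control IDs here, but the Jetson IMX477 driver may expose these under different (vendor-specific) IDs and ranges, so the exact IDs should be checked with `v4l2-ctl --list-ctrls` first.

```c
/* set_controls.c - minimal sketch of setting exposure and gain via V4L2.
 * NOTE: the control IDs below are the generic V4L2 ones; the Jetson
 * IMX477 driver may expose exposure/gain under different IDs, so check
 * `v4l2-ctl -d /dev/video0 --list-ctrls` before relying on these. */
#include <fcntl.h>
#include <stdio.h>
#include <string.h>
#include <sys/ioctl.h>
#include <unistd.h>
#include <linux/videodev2.h>

static int set_ctrl(int fd, unsigned int id, int value)
{
    struct v4l2_control ctrl;

    memset(&ctrl, 0, sizeof(ctrl));
    ctrl.id = id;
    ctrl.value = value;

    if (ioctl(fd, VIDIOC_S_CTRL, &ctrl) < 0) {
        perror("VIDIOC_S_CTRL");
        return -1;
    }
    return 0;
}

int main(void)
{
    int fd = open("/dev/video0", O_RDWR);
    if (fd < 0) {
        perror("open /dev/video0");
        return 1;
    }

    /* Example values only - valid ranges depend on the sensor driver. */
    set_ctrl(fd, V4L2_CID_EXPOSURE, 5000);
    set_ctrl(fd, V4L2_CID_GAIN, 16);

    close(fd);
    return 0;
}
```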
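
And this is the direction I am exploring on the kernel side. It only demonstrates grabbing struct file handles on the two existing devices with filp_open() from inside a module; whether forwarding read/ioctl/mmap to the underlying driver through those handles is even legitimate (ioctl arguments are normally userspace pointers) is exactly the part I am unsure about, so please treat this as my assumption rather than something I have working.

```c
/* wrapcam.c - sketch: open two existing video devices from inside a
 * kernel module.  Only shows obtaining the handles with filp_open();
 * merging frames and forwarding controls is the open question. */
#include <linux/module.h>
#include <linux/kernel.h>
#include <linux/fs.h>
#include <linux/file.h>
#include <linux/err.h>

static struct file *cam0;
static struct file *cam1;

static int __init wrapcam_init(void)
{
    cam0 = filp_open("/dev/video0", O_RDWR, 0);
    if (IS_ERR(cam0)) {
        pr_err("wrapcam: cannot open /dev/video0 (%ld)\n", PTR_ERR(cam0));
        return PTR_ERR(cam0);
    }

    cam1 = filp_open("/dev/video1", O_RDWR, 0);
    if (IS_ERR(cam1)) {
        pr_err("wrapcam: cannot open /dev/video1 (%ld)\n", PTR_ERR(cam1));
        filp_close(cam0, NULL);
        return PTR_ERR(cam1);
    }

    pr_info("wrapcam: holding handles on both cameras\n");
    /* TODO: register our own merged /dev/video2 (presumably via
     * v4l2_device_register() + video_register_device()) and forward
     * read/ioctl/mmap to cam0/cam1 - this is the part I need an
     * example for. */
    return 0;
}

static void __exit wrapcam_exit(void)
{
    filp_close(cam1, NULL);
    filp_close(cam0, NULL);
}

module_init(wrapcam_init);
module_exit(wrapcam_exit);
MODULE_LICENSE("GPL");
MODULE_DESCRIPTION("Sketch: wrap /dev/video0 and /dev/video1 as one device");
```

The TODO in init is where I imagine the merged /dev/video2 would be registered; the question is how to correctly drive the two underlying devices from there.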