Hi Tomi,

On Fri, Oct 07, 2022 at 02:58:28PM +0300, Tomi Valkeinen wrote:
> On 06/10/2022 14:20, Sakari Ailus wrote:
> > On Thu, Feb 11, 2021 at 03:44:56PM +0200, Tomi Valkeinen wrote:
>
> You found an old one =).
>
> >> Hi all,
> >>
> >> On 28/03/2019 22:05, Jacopo Mondi wrote:
> >>> Hello,
> >>> new iteration of multiplexed stream support patch series.
> >>>
> >>> V3 available at:
> >>> https://patchwork.kernel.org/cover/10839889/
> >>>
> >>> V2 sent by Niklas is available at:
> >>> https://patchwork.kernel.org/cover/10573817/
> >>>
> >>> Series available at:
> >>> git://jmondi.org/linux v4l2-mux/media-master/v4
> >>
> >> I'm trying to understand how these changes can be used with virtual
> >> channels and also with embedded data.
> >>
> >> I have an SoC with two CSI-2 RX ports, both of which connect to a
> >> processing block with 8 DMA engines. Each of the DMA engines can be
> >> programmed to handle a certain virtual channel and datatype.
> >>
> >> The board has a multiplexer, connected to 4 cameras, and the multiplexer
> >> connects to the SoC's CSI-2 RX port. This board has just one multiplexer
> >> connected, but, of course, both RX ports could have a multiplexer,
> >> amounting to a total of 8 cameras.
> >>
> >> So, in theory, there could be 16 streams to be handled (4 pixel streams
> >> and 4 embedded data streams for both RX ports). With only 8 DMA engines
> >> available, the driver has to manage them dynamically, reserving a DMA
> >> engine when a stream is started.
> >>
> >> My confusion is with the /dev/video nodes. I think it would be logical
> >> to create 8 of them, one for each DMA engine (or fewer, if I know there
> >> is only, say, 1 camera connected, in which case 2 nodes would be
> >
> > For more complex devices, it is often not possible to define such a number.
> > Say, put an external ISP in between the sensor and the CSI-2 receiver, and
> > you may get more streams than you would from the sensor alone.
> >
> >> enough). But in that case how does the user know what data is being
> >> received from that node? In other words, how to connect, say,
> >> /dev/video0 to the second camera's embedded data stream?
> >>
> >> Another option would be to create 16 /dev/video nodes, and document that
> >> the first one maps to virtual channel 0 + pixel data, the second to virtual
> >> channel 0 + embedded data, and so on. And only allow 8 of them to be
> >> turned on at a time. But I don't like this idea much.
> >
> > This isn't great IMO as it is limited to pre-defined use cases.
> >
> >> The current driver architecture is such that the multiplexer is modeled
> >> as a subdev with 4 sink pads and one source pad, the SoC's RX ports
> >> are subdevs with a single sink pad and a single source pad, and then
> >> there are the video devices connected to the RX's source pad.
> >>
> >> And while I can connect the video node's pad to the source pad on either
> >> of the RX ports, I don't think I have any way to define which stream it
> >> receives.
> >>
> >> Does that mean that each RX port subdev should instead have 8 source
> >> pads? Isn't a pad like a physical connection? There's really just one
> >> output from the RX port, with multiplexed streams, so 8 pads doesn't
> >> sound right.
> >
> > If you have eight DMAs you should always have eight video nodes.
> >
> > I would put one link between the sub-device and a video node, and handle
> > the incoming streams by routing them to the desired video nodes.
>
> This is how it's been for quite a while. However, I think this model
> causes problems with more programmable DMA systems, where there's no
> maximum number of DMA "engines" (or the max is something like 256). But
> for now those systems can just define a sensible number of DMAs (8? 16?
> I guess it depends on the HW).

Agreed, if we get to 256 (or more) DMA engines (or likely, in that case,
DMA engine contexts), then we'll need a different API, with explicit
stream support on video nodes. Hopefully someone else will solve that
problem :-)
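For illustration only, here is a rough userspace sketch of the routing
model discussed above. It is not code from the series; the device path,
pad numbers and stream numbers are invented, and the structure layout
follows the routing UAPI proposed here, so it may not match the exact
layout that eventually gets merged. It routes one camera's pixel data
and embedded data streams to two separate source streams on the CSI-2
receiver subdev:

	/*
	 * Hypothetical example: route camera 0's pixel data (sink stream 0)
	 * and embedded data (sink stream 1) to two separate source streams
	 * on a CSI-2 receiver subdev.  Device path, pad and stream numbers
	 * are made up; field names follow the proposed routing UAPI.
	 */
	#include <fcntl.h>
	#include <stdint.h>
	#include <stdio.h>
	#include <string.h>
	#include <sys/ioctl.h>
	#include <unistd.h>

	#include <linux/v4l2-subdev.h>

	int main(void)
	{
		struct v4l2_subdev_route routes[2];
		struct v4l2_subdev_routing routing;
		int fd, ret;

		/* CSI-2 RX subdev node (example path). */
		fd = open("/dev/v4l-subdev2", O_RDWR);
		if (fd < 0) {
			perror("open");
			return 1;
		}

		memset(routes, 0, sizeof(routes));
		memset(&routing, 0, sizeof(routing));

		/* Camera 0 pixel data: sink pad 0, stream 0 -> source pad 1, stream 0. */
		routes[0].sink_pad = 0;
		routes[0].sink_stream = 0;
		routes[0].source_pad = 1;
		routes[0].source_stream = 0;
		routes[0].flags = V4L2_SUBDEV_ROUTE_FL_ACTIVE;

		/* Camera 0 embedded data: sink pad 0, stream 1 -> source pad 1, stream 1. */
		routes[1].sink_pad = 0;
		routes[1].sink_stream = 1;
		routes[1].source_pad = 1;
		routes[1].source_stream = 1;
		routes[1].flags = V4L2_SUBDEV_ROUTE_FL_ACTIVE;

		routing.which = V4L2_SUBDEV_FORMAT_ACTIVE;
		routing.num_routes = 2;
		routing.routes = (__u64)(uintptr_t)routes;

		ret = ioctl(fd, VIDIOC_SUBDEV_S_ROUTING, &routing);
		if (ret < 0)
			perror("VIDIOC_SUBDEV_S_ROUTING");

		close(fd);
		return ret < 0 ? 1 : 0;
	}

The driver then decides which video node (and thus which DMA engine)
each source stream feeds, so what a given /dev/videoX carries is
determined by the configured routing rather than by a fixed channel
numbering.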
-- 
Regards,

Laurent Pinchart