RE: Mem2Mem V4L2 devices [RFC]

Hello,

On Wednesday, October 07, 2009 4:03 PM Karicheri, Muralidharan wrote:

> >How is the hardware actually designed? I see two possibilities:
> >
> >1.
> >[input buffer] --[dma engine]----> [resizer1] --[dma]-> [mem output buffer1]
> >                               \-> [resizer2] --[dma]-> [mem output buffer2]
> >
> This is the case.
> >2.
> >[input buffer] ---[dma engine1]-> [resizer1] --[dma]-> [mem output buffer1]
> >                \-[dma engine2]-> [resizer2] --[dma]-> [mem output buffer2]
> >
> >In the first case we would really have problems mapping it properly to
> >video nodes. But we should consider whether there are any use cases for
> >such a design (in terms of a mem2mem device).
> 
> Why not? In a typical camera scenario, the application can feed one frame and get two output frames
> (one for storing and another, at a lower resolution, for sending over email). I just gave an example.

You gave an example of a Y-type pipeline which starts in a real streaming
device (a camera), which is a completely different thing. A Y-type CAPTURE
pipeline is quite common and can simply be mapped to 2 different capture
video nodes.

In my previous mail I asked about a Y-type pipeline which starts in memory.
I don't think there is any common use case for such a thing.
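
Just for illustration, the application side of such a mapping is trivial; a
minimal sketch (device paths, resolutions and formats are invented for the
example, error handling omitted):

#include <fcntl.h>
#include <sys/ioctl.h>
#include <linux/videodev2.h>

/* Sketch: one sensor feeding two resizers, exposed to userspace
 * as two independent capture video nodes (paths hypothetical). */
static void start_y_capture(void)
{
	int fd_full  = open("/dev/video0", O_RDWR); /* full resolution */
	int fd_small = open("/dev/video1", O_RDWR); /* downscaled copy */
	struct v4l2_format fmt = { .type = V4L2_BUF_TYPE_VIDEO_CAPTURE };
	enum v4l2_buf_type type = V4L2_BUF_TYPE_VIDEO_CAPTURE;

	fmt.fmt.pix.width = 1280;
	fmt.fmt.pix.height = 720;
	fmt.fmt.pix.pixelformat = V4L2_PIX_FMT_YUYV;
	ioctl(fd_full, VIDIOC_S_FMT, &fmt);

	fmt.fmt.pix.width = 320;
	fmt.fmt.pix.height = 240;
	ioctl(fd_small, VIDIOC_S_FMT, &fmt);

	/* ... VIDIOC_REQBUFS + VIDIOC_QBUF on both nodes ... */
	ioctl(fd_full, VIDIOC_STREAMON, &type);
	ioctl(fd_small, VIDIOC_STREAMON, &type);
}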

> >I know that this Y-type design makes sense as a
> >part of the pipeline from a sensor or decoder device. But I cannot find any
> >useful use case for a mem2mem version of it.
> >
> >The second case is much more trivial. One can just create two separate
> >resizer devices (with their own nodes) or one resizer driver with two
> >hardware resizers underneath it. In both cases the application would simply
> >queue the input buffer twice, once for each transaction.
> I am assuming we are using the one-node implementation model suggested by Ivan.
> 
> In hardware, streaming must start at the same time (there is only one bit in the register). So if we
> had a second node for this, the driver would need to match the IO instance of the second node with the
> corresponding request on the first node, and this takes us to the same complication as with the
> 2-video-node implementation.

Right.
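
For reference, had the hardware been of the second type, the two-transaction
pattern from my quoted paragraph would look roughly like this on the
application side (paths hypothetical, error handling omitted):

#include <stddef.h>
#include <fcntl.h>
#include <sys/ioctl.h>
#include <linux/videodev2.h>

/* Sketch of the two-transaction pattern: the same source frame is
 * queued to two independent mem2mem resizer nodes. */
static void resize_one_frame_twice(void *src, size_t len)
{
	int fd1 = open("/dev/video2", O_RDWR); /* resizer 1 */
	int fd2 = open("/dev/video3", O_RDWR); /* resizer 2 */
	struct v4l2_buffer buf = {
		.type   = V4L2_BUF_TYPE_VIDEO_OUTPUT,
		.memory = V4L2_MEMORY_USERPTR,
		.index  = 0,
	};

	buf.m.userptr = (unsigned long)src;
	buf.length    = len;

	ioctl(fd1, VIDIOC_QBUF, &buf);
	ioctl(fd2, VIDIOC_QBUF, &buf);

	/* ... queue a CAPTURE buffer of the wanted size on each node,
	 * VIDIOC_STREAMON both, then VIDIOC_DQBUF the results ... */
}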

> Since only one capture queue per IO instance is possible in this model (matched by buf type), I don't
> think we can scale it to the 2-output case. Or is it possible to queue 2 output buffers of two
> different sizes to the same queue?

This can be hacked by introducing yet another 'type' (for example
SECOND_CAPTURE), but I don't like such a solution. Anyway, would we really
need a Y-type mem2mem device?
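
Just to make the hack concrete, it would look something like this from
userspace; note that SECOND_CAPTURE is purely hypothetical and does not
exist in the V4L2 API:

#include <sys/ioctl.h>
#include <linux/videodev2.h>

/* Purely hypothetical -- NOT part of the V4L2 API: an invented
 * buffer type selecting a second capture queue within the same
 * IO instance. */
#define V4L2_BUF_TYPE_VIDEO_SECOND_CAPTURE 0x80

static void queue_two_outputs(int fd)
{
	struct v4l2_buffer cap1 = {
		.type   = V4L2_BUF_TYPE_VIDEO_CAPTURE,
		.memory = V4L2_MEMORY_MMAP,
		.index  = 0,
	};
	struct v4l2_buffer cap2 = {
		.type   = V4L2_BUF_TYPE_VIDEO_SECOND_CAPTURE, /* invented */
		.memory = V4L2_MEMORY_MMAP,
		.index  = 0,
	};

	ioctl(fd, VIDIOC_QBUF, &cap1); /* e.g. full-size result */
	ioctl(fd, VIDIOC_QBUF, &cap2); /* e.g. downscaled result */
}

The driver would then have to demultiplex the two capture queues
internally, which is exactly the kind of special-casing I would like to
avoid.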

Best regards
--
Marek Szyprowski
Samsung Poland R&D Center

