RE: Mem2Mem V4L2 devices [RFC]

> > > [...]of such design? (in terms of mem-2-mem device)
> >
> > Why not? In a typical camera scenario, the application can feed one frame
> > and get two output frames (one for storing and another, at a lower
> > resolution, for sending over email). I just gave an example.

> You gave an example of a Y-type pipeline which starts in a real streaming
> device (camera), which is a completely different thing. A Y-type CAPTURE
> pipeline is quite a common thing, which can simply be mapped to 2 different
> capture video nodes.
>
> In my previous mail I asked about a Y-type pipeline which starts in memory.
> I don't think there is any common use case for such a thing.

Marek,

You can't say that. This feature is currently supported in our internal release, which is
being used by our customers. So for feature parity it is required to be supported, as
we can't determine how many customers are using this feature. Besides, in the
scenario that I mentioned above, the following happens:

sensor -> CCDC -> Memory (video node)

Memory -> Previewer -> Resizer1 -> Memory
                   |-> Resizer2 -> Memory

Typically the application captures a full-resolution frame (Bayer RGB) to memory and
then uses the Previewer and Resizer in memory-to-memory mode to convert it to UYVY
format. The application uses the second resizer to get a lower-resolution frame
simultaneously. We would like to expose this hardware capability to user applications
through this memory-to-memory device.
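
To make this concrete, the memory-to-memory leg of that flow would look roughly
like this from the application's side. This is a sketch only: the node name
/dev/video0, the resolutions and the Bayer format are made-up examples, and
buffer setup and error handling are omitted:

/* Sketch of the mem2mem conversion step on a single video node.
 * /dev/video0 and the resolutions are made-up examples. */
#include <fcntl.h>
#include <string.h>
#include <sys/ioctl.h>
#include <linux/videodev2.h>

int main(void)
{
    int fd = open("/dev/video0", O_RDWR);
    struct v4l2_format fmt;

    /* Input queue: the full-resolution Bayer frame captured earlier. */
    memset(&fmt, 0, sizeof(fmt));
    fmt.type = V4L2_BUF_TYPE_VIDEO_OUTPUT;
    fmt.fmt.pix.width = 2048;
    fmt.fmt.pix.height = 1536;
    fmt.fmt.pix.pixelformat = V4L2_PIX_FMT_SBGGR16;
    ioctl(fd, VIDIOC_S_FMT, &fmt);

    /* Output queue: the UYVY result of Previewer -> Resizer1. */
    memset(&fmt, 0, sizeof(fmt));
    fmt.type = V4L2_BUF_TYPE_VIDEO_CAPTURE;
    fmt.fmt.pix.width = 1024;
    fmt.fmt.pix.height = 768;
    fmt.fmt.pix.pixelformat = V4L2_PIX_FMT_UYVY;
    ioctl(fd, VIDIOC_S_FMT, &fmt);

    /* Then REQBUFS + QBUF on both queues, STREAMON, and DQBUF the
     * converted frame; the second Resizer output is the open question. */
    return 0;
}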

> > > I know that this Y-type design makes sense as a
> > > part of the pipeline from a sensor or decoder device. But I cannot find
> > > any useful use case for a mem2mem version of it.
> > >
> > > The second case is much more trivial. One can just create two separate
> > > resizer devices (with their own nodes) or one resizer driver with two
> > > hardware resizers underneath it. In both cases the application would
> > > simply queue the input buffer 2 times for both transactions.
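
(For reference, that two-node alternative would mean something like the
following on the application side; the two file descriptors, the USERPTR
choice and the buffer details are made-up illustration, with all other setup
and error handling omitted:)

/* Sketch: with two separate resizer nodes, the application just queues
 * the same source frame on each node's OUTPUT queue. The fds come from
 * made-up nodes; REQBUFS/S_FMT setup and error handling are omitted. */
#include <stddef.h>
#include <string.h>
#include <sys/ioctl.h>
#include <linux/videodev2.h>

static void queue_to_both_resizers(int fd1, int fd2,
                                   void *src_frame, size_t src_size)
{
    struct v4l2_buffer buf;

    memset(&buf, 0, sizeof(buf));
    buf.type = V4L2_BUF_TYPE_VIDEO_OUTPUT;
    buf.memory = V4L2_MEMORY_USERPTR;
    buf.index = 0;
    buf.m.userptr = (unsigned long)src_frame;
    buf.length = src_size;

    ioctl(fd1, VIDIOC_QBUF, &buf);   /* transaction 1, e.g. full size */
    ioctl(fd2, VIDIOC_QBUF, &buf);   /* transaction 2, e.g. downscaled */
}
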
> > I am assuming we are using the one-node implementation model suggested by
> > Ivan.
> >
> > At the hardware level, streaming has to happen at the same time (there is
> > only one bit in the register). So if we had a second node for this, the
> > driver would need to match the IO instance of the second device with the
> > corresponding request on the first node, and this takes us to the same
> > complication as with the 2 video nodes implementation.

> Right.

> > Since only one capture queue per IO instance is possible in this model
> > (matched by buf type), I don't think we can scale it to the 2 outputs
> > case. Or is it possible to queue 2 output buffers of two different sizes
> > to the same queue?

> This can be hacked by introducing yet another 'type' (for example
> SECOND_CAPTURE), but I don't like such a solution. Anyway - would we really
> need a Y-type mem2mem device?

Yes. No hacking, please! We should be able to do S_FMT for the second Resizer
output and dequeue the frame. I am not sure how we can handle this in this model.
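
Just to make the problem concrete, the 'yet another type' hack would amount to
something like the sketch below. V4L2_BUF_TYPE_VIDEO_SECOND_CAPTURE is purely
hypothetical and does not exist in videodev2.h; it only illustrates the idea:

/* Purely hypothetical sketch of the disliked hack: a second capture
 * buffer type so that S_FMT/QBUF can address the second Resizer output.
 * This buffer type does NOT exist in the V4L2 API. */
#include <string.h>
#include <sys/ioctl.h>
#include <linux/videodev2.h>

#define V4L2_BUF_TYPE_VIDEO_SECOND_CAPTURE \
        ((enum v4l2_buf_type)(V4L2_BUF_TYPE_PRIVATE + 1))

static void set_second_output_format(int fd)
{
    struct v4l2_format fmt;

    memset(&fmt, 0, sizeof(fmt));
    fmt.type = V4L2_BUF_TYPE_VIDEO_SECOND_CAPTURE;
    fmt.fmt.pix.width = 320;                     /* lower-resolution copy */
    fmt.fmt.pix.height = 240;
    fmt.fmt.pix.pixelformat = V4L2_PIX_FMT_UYVY;
    ioctl(fd, VIDIOC_S_FMT, &fmt);               /* driver-private hack */
}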

> Best regards
> --
> Marek Szyprowski
> Samsung Poland R&D Center


