RE: Mem2Mem V4L2 devices [RFC]

Hello,

On Monday, October 05, 2009 8:27 PM Hiremath, Vaibhav wrote:

> > > [Hiremath, Vaibhav] IMO, this implementation is not streaming
> > model, we are trying to fit mem-to-mem
> > > forcefully to streaming.
> >
> > Why this does not fit streaming? I see no problems with streaming
> > over mem2mem device with only one video node. You just queue input
> > and output buffers (they are distinguished by 'type' parameter) on
> > the same video node.
> >
> [Hiremath, Vaibhav] Do we create separate queue of buffers based on type? I think we don't.

Why not? I really see no problem implementing such a driver, especially since this heavily increases the number of use cases in which
such a device can be used.

> App1		App2		App3		...		AppN
>   |		 |		|		|		  |
>    -----------------------------------------------
> 				|
> 			/dev/video0
> 				|
> 			Resizer Driver
> 
> Everyone will be doing streamon, and in normal use case every application must be getting buffers from
> another module (another driver, codecs, DSP, etc...) in multiple streams, 0, 1,2,3,4....N

Right.

> Every application will start streaming with (mostly) fixed scaling factor which mostly never changes.

Right. The driver can store the scaling factors and other parameters in the private data of each opened instance of the /dev/video0
device.

> This one video node approach is possible only with constraint that, the application will always queue
> only 2 buffers with one CAPTURE and one with OUTPUT type. He has to wait till first/second gets
> finished, you can't queue multiple buffers (input and output) simultaneously.

Why do you think you cannot queue multiple buffers? IMHO you can perfectly well queue more than one input buffer, then queue the same
number of output buffers, and the device will process them all.

> I do agree here with you that we need to investigate on whether we really have such use-case. Does it
> make sense to put such constraint on application?

What constraint?

> What is the impact? Again in case of down-scaling,
> application may want to use same buffer as input, which is easily possible with single node approach.

Right. But take into account that down-scaling is the one special case in which the operation can be performed in-place. Usually all
other operations (such as color space conversion or rotation) require two buffers. Please note that having only one video node does
not mean that all operations must be done in-place: as Ivan stated, you can perfectly well queue two separate input and output
buffers onto the one video node and the driver can handle this correctly.

Best regards
--
Marek Szyprowski
Samsung Poland R&D Center

--
To unsubscribe from this list: send the line "unsubscribe linux-media" in
the body of a message to majordomo@xxxxxxxxxxxxxxx
More majordomo info at  http://vger.kernel.org/majordomo-info.html
