RE: Mem2Mem V4L2 devices [RFC]

Hello,

On Monday, October 05, 2009 7:59 AM Hiremath, Vaibhav wrote:

> -----Original Message-----
> From: linux-media-owner@xxxxxxxxxxxxxxx [mailto:linux-media-owner@xxxxxxxxxxxxxxx] On Behalf Of
> Hiremath, Vaibhav
> Sent: Monday, October 05, 2009 7:59 AM
> To: Ivan T. Ivanov; Marek Szyprowski
> Cc: linux-media@xxxxxxxxxxxxxxx; kyungmin.park@xxxxxxxxxxx; Tomasz Fujak; Pawel Osciak
> Subject: RE: Mem2Mem V4L2 devices [RFC]
> 
> 
> > -----Original Message-----
> > From: linux-media-owner@xxxxxxxxxxxxxxx [mailto:linux-media-
> > owner@xxxxxxxxxxxxxxx] On Behalf Of Ivan T. Ivanov
> > Sent: Friday, October 02, 2009 9:55 PM
> > To: Marek Szyprowski
> > Cc: linux-media@xxxxxxxxxxxxxxx; kyungmin.park@xxxxxxxxxxx; Tomasz
> > Fujak; Pawel Osciak
> > Subject: Re: Mem2Mem V4L2 devices [RFC]
> >
> >
> > Hi Marek,
> >
> >
> > On Fri, 2009-10-02 at 13:45 +0200, Marek Szyprowski wrote:
> > > Hello,
> > >
> <snip>
> 
> > > image format and size, while the existing v4l2 ioctls would only
> > refer
> > > to the output buffer. Frankly speaking, we don't like this idea.
> >
> > I think it is not unusual for one video device to declare that it
> > supports both input and output operation at the same time.
> >
> > Let's take a resizer device as an example. It can always inform the
> > user space application that
> >
> > struct v4l2_capability.capabilities ==
> > 		(V4L2_CAP_VIDEO_CAPTURE | V4L2_CAP_VIDEO_OUTPUT)
> >
> > User can issue S_FMT ioctl supplying
> >
> > struct v4l2_format.type = V4L2_BUF_TYPE_VIDEO_CAPTURE
> > 		  .pix  = width x height
> >
> > which will instruct the device to prepare its output for this
> > resolution. After that, the user can issue an S_FMT ioctl supplying
> >
> > struct v4l2_format.type = V4L2_BUF_TYPE_VIDEO_OUTPUT
> >    		  .pix  = width x height
> >
> > Using only these ioctls should be enough for the device driver
> > to know the required down/up scale factor.
> >
> > Regarding the color space, struct v4l2_pix_format has the field
> > 'pixelformat', which can be used to define the content of the input
> > and output buffers. So, using only existing ioctls, the user can have
> > a working resizer device.
> >
> > Also please note that there is VIDIOC_S_CROP, which can add the
> > flexibility of cropping on input or output.
> >
> [Hiremath, Vaibhav] I think this makes more sense in a capture pipeline, for example,
> 
> Sensor/decoder -> previewer -> resizer -> /dev/videoX
> 

I don't get this. In a strictly capture pipeline we get one video node anyway. 

However the question is how we should support a bit more complicated pipeline.

Just consider a resizer module and the pipeline:

sensor/decoder -[bus]-> previewer -> [memory] -> resizer -> [memory]

([bus] means some kind of internal bus that is completely independent of the system memory)

Mapping this to video nodes is not so trivial. In fact, this pipeline consists of 2 independent (sub)pipelines connected by the user space
application:

sensor/decoder -[bus]-> previewer -> [memory] -[user application]-> [memory] -> resizer -> [memory]

For further analysis it should be cut into 2 separate pipelines: 

a. sensor/decoder -[bus]-> previewer -> [memory]
b. [memory] -> resizer -> [memory]

Again, mapping the first subpipeline is trivial:

sensor/decoder -[bus]-> previewer -> /dev/video0

But the latter can be mapped either as:

/dev/video1 -> resizer -> /dev/video1
(one video node approach)

or

/dev/video1 -> resizer -> /dev/video2
(2 video nodes approach).


So at the end the pipeline would look like this:

sensor/decoder -[bus]-> previewer -> /dev/video0 -[user application]-> /dev/video1 -> resizer -> /dev/video2

or 

sensor/decoder -[bus]-> previewer -> /dev/video0 -[user application]-> /dev/video1 -> resizer -> /dev/video1

> > The last thing which should be done is to QBUF 2 buffers and call
> > STREAMON.
> >
> [Hiremath, Vaibhav] IMO, this implementation is not a streaming model; we are trying to force mem-to-mem
> into streaming.

Why doesn't this fit streaming? I see no problems with streaming over a mem2mem device with only one video node. You just queue input
and output buffers (distinguished by the 'type' parameter) on the same video node.

> We have to put some constraints -
> 
> 	- The driver will always treat index 0 as input, irrespective of the number of buffers queued.
> 	- Or, the application should not queue more than 2 buffers.
> 	- What about multi-channel use-cases?
> 
> I think we have to have 2 device nodes which are capable of streaming multiple buffers, with both
> queuing buffers.

In the one video node approach there can be 2 buffer queues in one video node, for input and output respectively.

> The constraint would be the buffers must be mapped one-to-one.

Right, each queued input buffer must have a corresponding output buffer.

Best regards
--
Marek Szyprowski
Samsung Poland R&D Center


