RE: [PATCH 0/5] V4L2 patches for Intel Moorestown Camera Imaging

Forgot to add a disclaimer...

These are just my opinions, based on past discussions of similar topics on
this mailing list...

Murali Karicheri
Software Design Engineer
Texas Instruments Inc.
Germantown, MD 20874
email: m-karicheri2@xxxxxx

>-----Original Message-----
>From: linux-media-owner@xxxxxxxxxxxxxxx [mailto:linux-media-
>owner@xxxxxxxxxxxxxxx] On Behalf Of Karicheri, Muralidharan
>Sent: Monday, September 28, 2009 11:25 AM
>To: Yu, Jinlu; Mauro Carvalho Chehab
>Cc: linux-media@xxxxxxxxxxxxxxx
>Subject: RE: [PATCH 0/5] V4L2 patches for Intel Moorestown Camera Imaging
>
>Hi,
>
>First of all, based on the media controller proposal, you would be better
>off developing the ISP as a sub-device as well. This way you will be able
>to make connections as below...
>
>sensor -> isp -> video node
>
>Please check out the bus parameter settings RFC from Hans. Some of these
>parameters are set using board/platform-specific configuration, and there
>is a new API proposed to do this. The following are candidates for this:
>
>>bus_width; width of the bus connecting the sensor and the ISP.
>>hpol; horizontal sync polarity
>>vpol; vertical sync polarity
>>edge; sampling edge
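As a rough sketch, the four board-level parameters above could be captured in a small platform-data structure that board code fills in per sensor. All names below are illustrative assumptions modeled on the RFC under discussion, not a real kernel API:

```c
#include <assert.h>

/* Illustrative sketch only: the struct, field, and enum names are
 * assumptions, not the bus-parameter RFC's actual API. */
enum sample_edge { EDGE_RISING, EDGE_FALLING };

struct sensor_bus_params {
	unsigned int bus_width;     /* data lines between sensor and ISP */
	int hsync_active_high;      /* horizontal sync polarity */
	int vsync_active_high;      /* vertical sync polarity */
	enum sample_edge edge;      /* pixel-clock sampling edge */
};

/* Board/platform code would hand one of these to the bridge driver
 * through platform data; the bridge validates it before programming
 * the acquisition interface. */
static int bus_params_valid(const struct sensor_bus_params *p)
{
	return p->bus_width == 8 || p->bus_width == 10 || p->bus_width == 16;
}
```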
>
>
>>field_sel; field selection, even or odd.
>I am assuming this will be reported by the ISP based on what it receives
>from the sensor for interlaced scan. If not, you probably need to add this
>to the sub-device interface as a private op, provided the sub-device can
>report it.
>
>>ycseq; YCbCr sequence, YCbCr or YCrCb or CbYCrY or CrYCbY
>
>This is also being discussed through another related RFC on the SoC camera
>side. Assuming the sensor outputs YCbCr and the ISP is capable of changing
>the sequence (which is the case in the DM355/DM6446/DM365 VPFE), these are
>enumerated at the video node through ENUM_FMT as different frame formats.
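A minimal sketch of what such an enumeration might look like, exposing the four component orders as separate packed 4:2:2 pixel formats. The fourcc values are the standard V4L2 ones; the helper names are made up for illustration:

```c
#include <assert.h>
#include <stdint.h>

#define fourcc(a, b, c, d) ((uint32_t)(a) | ((uint32_t)(b) << 8) | \
			    ((uint32_t)(c) << 16) | ((uint32_t)(d) << 24))

/* The four YCbCr component orders listed as distinct pixel formats, the
 * way a VPFE-style driver would report them from its enum_fmt handler. */
static const uint32_t ycbcr_orders[] = {
	fourcc('Y', 'U', 'Y', 'V'),	/* Y Cb Y Cr */
	fourcc('Y', 'V', 'Y', 'U'),	/* Y Cr Y Cb */
	fourcc('U', 'Y', 'V', 'Y'),	/* Cb Y Cr Y */
	fourcc('V', 'Y', 'U', 'Y'),	/* Cr Y Cb Y */
};

/* Sketch of an enum_fmt callback body: index walks the format table. */
static int enum_ycbcr_fmt(unsigned int index, uint32_t *pixelformat)
{
	if (index >= sizeof(ycbcr_orders) / sizeof(ycbcr_orders[0]))
		return -1;	/* -EINVAL in the real driver */
	*pixelformat = ycbcr_orders[index];
	return 0;
}
```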
>
>>conv422; subsampling type, co-sited 4:4:4 or non-cosited 4:4:4 or color
>>interpolation
>Again, assuming these are on the ISP side, you need to use driver-specific
>IOCTLs to do this. Converting the ISP to a sub-device is the way to go,
>since the media controller proposal allows you to have these ioctls set it
>in the sub-device directly.
>
>>bpat; bayer sampling sequence, RGRG GBGB or GRGR BGBG or ...
>
>This is something to be discussed in the bus parameter RFC. The camera can
>output different patterns, so the ISP needs to know what pattern is output
>by the sensor, and the bridge device probably needs to read it from the
>sensor and set it in the ISP. Data format setting is also discussed in the
>above RFC or in a separate RFC. Please check it out.
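The mediation described above might be sketched as follows. The types, ops, and register field are hypothetical illustrations, not the real Moorestown ISP interface:

```c
#include <assert.h>

/* Hypothetical sketch: the bridge queries the sensor sub-device for its
 * Bayer order and programs the matching value into the ISP.  Names are
 * assumptions for illustration only. */
enum bayer_order { BAYER_RGGB, BAYER_GRBG, BAYER_GBRG, BAYER_BGGR };

struct sensor_sd { enum bayer_order order; };	/* reported by the sensor */
struct isp_dev   { unsigned int bpat_reg; };	/* ISP pattern register */

static int bridge_sync_bayer(struct isp_dev *isp, const struct sensor_sd *s)
{
	/* In a real driver this would be a subdev query followed by a
	 * register write; here we just copy the pattern across. */
	isp->bpat_reg = (unsigned int)s->order;
	return 0;
}
```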
>
>
>Murali Karicheri
>Software Design Engineer
>Texas Instruments Inc.
>Germantown, MD 20874
>email: m-karicheri2@xxxxxx
>
>>-----Original Message-----
>>From: linux-media-owner@xxxxxxxxxxxxxxx [mailto:linux-media-
>>owner@xxxxxxxxxxxxxxx] On Behalf Of Yu, Jinlu
>>Sent: Monday, September 28, 2009 5:26 AM
>>To: Mauro Carvalho Chehab
>>Cc: linux-media@xxxxxxxxxxxxxxx
>>Subject: RE: [PATCH 0/5] V4L2 patches for Intel Moorestown Camera Imaging
>>
>>We have a solution for this, which is to make them the sensor's private
>>data and let the ISP access them by sharing the same structure definition.
>>
>>But I don't think it is a good idea, because it is driver-specific and
>>cannot be used by others.
>>
>>Best Regards
>>Jinlu Yu
>>UMG UPSG PRC
>>INET: 8758 1603
>>TEL:  86 10 8217 1603
>>FAX:  86 10 8286 1400
>>-----Original Message-----
>>From: linux-media-owner@xxxxxxxxxxxxxxx [mailto:linux-media-
>>owner@xxxxxxxxxxxxxxx] On Behalf Of Yu, Jinlu
>>Sent: 27 September 2009 22:31
>>To: Mauro Carvalho Chehab
>>Cc: linux-media@xxxxxxxxxxxxxxx
>>Subject: RE: [PATCH 0/5] V4L2 patches for Intel Moorestown Camera Imaging
>>
>>Hi, Mauro
>>
>>Thank you for your suggestion on this.
>>
>>Now I have another problem. The ISP needs the following parameters from
>>the sensor to set up the acquisition interface, but I cannot find suitable
>>subdev ioctls to get them from the sensor driver.
>>
>>bus_width; width of the bus connecting the sensor and the ISP.
>>field_sel; field selection, even or odd.
>>ycseq; YCbCr sequence, YCbCr or YCrCb or CbYCrY or CrYCbY
>>conv422; subsampling type, co-sited 4:4:4 or non-cosited 4:4:4 or color
>>interpolation
>>bpat; bayer sampling sequence, RGRG GBGB or GRGR BGBG or ...
>>hpol; horizontal sync polarity
>>vpol; vertical sync polarity
>>edge; sampling edge
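For illustration, the eight parameters listed above could be collected into a single acquisition-interface description passed from the sensor side to the ISP. All names here are assumptions; no such structure exists in the subdev API under discussion:

```c
#include <assert.h>

/* Hypothetical sketch of the acquisition-interface description the ISP
 * needs, one field per parameter listed above.  Names are illustrative. */
enum ycbcr_seq { SEQ_YCBYCR, SEQ_YCRYCB, SEQ_CBYCRY, SEQ_CRYCBY };
enum conv422   { CONV_COSITED, CONV_NONCOSITED, CONV_INTERPOLATE };
enum bayer_pat { PAT_RGGB, PAT_GRBG, PAT_GBRG, PAT_BGGR };

struct acq_if_params {
	unsigned int   bus_width;	/* data lines between sensor and ISP */
	int            field_even;	/* field selection: 1 = even, 0 = odd */
	enum ycbcr_seq ycseq;		/* YCbCr component order */
	enum conv422   conv422;		/* 4:2:2 -> 4:4:4 subsampling type */
	enum bayer_pat bpat;		/* Bayer sampling sequence */
	int            hpol;		/* hsync polarity, 1 = active high */
	int            vpol;		/* vsync polarity, 1 = active high */
	int            edge_rising;	/* 1 = sample on rising clock edge */
};
```

A sensor driver could fill one of these and the bridge could fetch it through a (driver-private) subdev op until a standard API lands.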
>>
>>Best Regards
>>Jinlu Yu
>>UMG UPSG PRC
>>INET: 8758 1603
>>TEL:  86 10 8217 1603
>>FAX:  86 10 8286 1400
>>-----Original Message-----
>>From: Mauro Carvalho Chehab [mailto:mchehab@xxxxxxxxxxxxx]
>>Sent: 24 September 2009 19:45
>>To: Yu, Jinlu
>>Cc: linux-media@xxxxxxxxxxxxxxx
>>Subject: Re: [PATCH 0/5] V4L2 patches for Intel Moorestown Camera Imaging
>>
>>On Thu, 24 Sep 2009 19:21:40 +0800
>>"Yu, Jinlu" <jinlu.yu@xxxxxxxxx> wrote:
>>
>>> Hi, Hans/Guennadi
>>>
>>> I am modifying these drivers to comply with the v4l2 framework. I have
>>finished replacing our buffer management code with utility functions from
>>videobuf-core.c and videobuf-dma-contig.c. Now I am working on the subdev.
>>One thing I am sure of is that each sensor should be registered as a
>>v4l2_subdev, and the ISP (Image Signal Processor) should be registered as
>>a v4l2_device acting as the bridge device.
>>>
>>> But we have two ways to deal with the relationship between the sensor
>>and the ISP, and we don't know which one is better. Could you help me with
>>this?
>>>
>>> No. 1. Register the ISP as a video_device (/dev/video0) and treat each
>>of the sensors (SoC and RAW) as an input of the ISP. If I want to change
>>the sensor, I use VIDIOC_S_INPUT to change the input from sensor A to
>>sensor B. But I have a concern about this ioctl: I did not find any code
>>related to HW pipeline status checking or HW register setting in the
>>implementation of this ioctl (e.g. vino_s_input in
>>drivers/media/video/vino.c). So don't I have to stream off the HW pipeline
>>and change the HW register settings for the new input? Or is it the
>>application's responsibility to stream off the pipeline and renegotiate
>>the parameters for the new input?
>>>
>>> No. 2. Combine the SoC sensor together with the ISP as Channel One and
>>register it as /dev/video0, and combine the RAW sensor together with the
>>ISP as Channel Two and register it as /dev/video1. Of course, only one
>>channel works at any given time due to a HW restriction. When I want to
>>change the sensor (e.g. from the SoC sensor to the RAW sensor), I just
>>close /dev/video0 and open /dev/video1.
>>
>>No. 1 seems better. Since you need to re-negotiate parameters when
>>switching from one sensor to another, if some app tries to change from one
>>input to another while streaming, you should just return -EBUSY if it is
>>not possible to switch (for example, if the selected
>>format/resolution/frame rate is incompatible).
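A minimal sketch of this S_INPUT policy, assuming a hypothetical capture-device structure rather than the real driver's code:

```c
#include <assert.h>
#include <errno.h>

/* Hypothetical capture-device state; names are assumptions. */
struct capture_dev {
	int streaming;		/* set between STREAMON and STREAMOFF */
	int current_input;	/* index of the currently selected sensor */
};

/* Sketch of the VIDIOC_S_INPUT policy described above: refuse to switch
 * sensors mid-stream instead of silently reprogramming the hardware. */
static int sketch_s_input(struct capture_dev *dev, int input)
{
	if (dev->streaming && input != dev->current_input)
		return -EBUSY;
	dev->current_input = input;
	/* A real driver would reprogram the acquisition interface and
	 * renegotiate formats for the newly selected sensor here. */
	return 0;
}
```

Selecting the already-active input remains a no-op even while streaming, which matches the usual V4L2 expectation that redundant S_INPUT calls succeed.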
>>
>>
>>
>>Cheers,
>>Mauro
>>N嫥叉靣笡y氊b瞂千v豝?藓{.n?壏{睓鏱j)韰骅w*jg?秹殠娸/侁鋤罐枈?娹櫒璀??
>>摺玜囤瓽珴閔?鎗:+v墾妛鑶佶
>>N�����r��y���b�X��ǧv�^�)޺{.n�+����{���bj)���w*
>>jg��������ݢj/���z�ޖ��2�ޙ���&�)ߡ�a�����G���h��j:+v���w�٥
>N�����r��y���b�X��ǧv�^�)޺{.n�+����{���bj)���w*
>jg��������ݢj/���z�ޖ��2�ޙ���&�)ߡ�a�����G���h��j:+v���w�٥
��.n��������+%������w��{.n�����{��g����^n�r������&��z�ޗ�zf���h���~����������_��+v���)ߣ�m

