Re: [RFC] snapshot mode, flash capabilities and control

On Thursday, March 03, 2011 09:02:20 Guennadi Liakhovetski wrote:
> On Wed, 2 Mar 2011, Hans Verkuil wrote:
> 
> > On Wednesday, March 02, 2011 18:51:43 Guennadi Liakhovetski wrote:
> > > ...Just occurred to me:
> > > 
> > > On Mon, 28 Feb 2011, Guennadi Liakhovetski wrote:
> > > 
> > > > On Mon, 28 Feb 2011, Guennadi Liakhovetski wrote:
> > > > 
> > > > > On Mon, 28 Feb 2011, Hans Verkuil wrote:
> > > > > 
> > > > > > Does anyone know which drivers stop capture if there are no
> > > > > > buffers available? I'm not aware of any.
> > > > > 
> > > > > Many soc-camera hosts do that.
> > > > > 
> > > > > > I think this is certainly a good initial approach.
> > > > > > 
> > > > > > Can someone make a list of things needed for flash/snapshot? So
> > > > > > don't look yet at the implementation, but just start a list of
> > > > > > functionalities that we need to support. I don't think I have
> > > > > > seen that yet.
> > > > > 
> > > > > These are not features that we _have_ to implement, just the ones
> > > > > related to the snapshot mode:
> > > > > 
> > > > > * flash strobe (provided we do not want to control its timing from
> > > > > 	generic controls, and leave that to "reasonable defaults" or to
> > > > > 	private controls)
> > > 
> > > Wouldn't it be a good idea to also export an LED (drivers/leds/) API
> > > from our flash implementation? At least for applications like torch.
> > > Downside: the LED API itself is not advanced enough for all our uses,
> > > and exporting two interfaces to the same device is usually a bad idea.
> > > Still, conceptually it seems to be a good fit.
> > 
> > I believe we discussed LEDs before (during a discussion about adding
> > illuminator controls). I think the preference was to export LEDs as V4L
> > controls.
> 
> Unfortunately, I missed that one.
> 
> > In general I am no fan of exporting multiple interfaces. It only leads to
> > double maintenance and I see no noticeable advantage to userspace, only
> > confusion.
> 
> On the one hand - yes, but OTOH: think about MFDs. Also think about some
> other functions internal to cameras, like i2c busses. Those I2C busses
> used to be handled internally, but we now prefer properly exporting them
> at the system level and abstracting devices on them as normal i2c
> devices. Think about audio, say, on HDMI. I don't think we have any such
> examples in the mainline atm, but if you had to implement an HDMI output
> as a v4l2 device - you would export a standard audio interface too, and
> they would probably share some register spaces, at least on the PHY.

And the fact that audio is handled through a separate device has always been a
source of problems. That said, the complexity of audio and the fact that you
want to make it available to audio applications outweigh the problems
introduced by requiring separate devices/APIs.

> Think about cameras with a separate illumination sensor (yes, I have such
> a webcam, which has a separate sensor window used to control its "flash"
> LEDs; no idea whether that's also available to the user - it works
> automatically: cover the sensor and the LEDs go on) - wouldn't you export
> it as an "ambient light sensor" device?

For me it would depend on how it is used. If it is a 'random' LED that does
not relate to the general functioning of the device, then the standard LED API
is perfectly fine. But I would associate that more with test LEDs on a
developer board, not with LEDs that are clearly part of the device.

One other advantage of having this as e.g. controls is that they will appear
in the list of V4L2 controls that most apps show. It is immediately available
to the end user. If we did this as an LED driver, then apps would need to
find the LEDs (presumably using the media controller), somehow discover what
each LED is for, and then show the possible functions of the LED to the end
user.

Frankly, that's never going to happen.

> Wouldn't using a standard API like the LED one make it easier to cover the
> variety of implementations: sensor-strobe driven, an external dedicated
> flash controller coupled with the sensor, a primitive GPIO- or PWM-operated
> light? The LED API also has an in-kernel part (triggers) and a
> user-interface (sysfs), which is also something that we need. Consider a
> case where you have some LED controller on the system that controls
> several LEDs, some for camera status, some for other system statuses.
> 
> So, not sure...

You have more flexibility on the SoC since you can make the reasonable 
assumption that the software running on it knows the hardware. So I am not 
saying that you should never do it using the standard LED driver. But here too 
I think it depends on how the LED is implemented in hardware: if it is clearly 
specific to the camera flash (e.g. because it is a separate i2c flash 
controller), then I would expect to see it implemented as a control (or at 
least as part of the V4L2 API). If it is controlled through a generic LED 
controller, then it might make more sense to use the LED driver and add a
reference to it in the media controller.

But I suspect that that will be the exception rather than the rule.

Regards,

	Hans

> 
> Thanks
> Guennadi
> ---
> Guennadi Liakhovetski, Ph.D.
> Freelance Open-Source Software Developer
> http://www.open-technology.de/
> 

