v4l2 device property framework in userspace

Hello,

I was wondering whether it makes sense to raise a discussion about a few
aspects listed below. My apologies if this is old news; I haven't been
following this list for long.

Since older kernels didn't have the matching functionality, we (a few
loosely connected developers) "hacked" together a userspace framework to
address various extra features (multi-sensor heads, realtime
requirements, and special sensor properties). So our kernel driver
(specific to the PPI port of the Blackfin architecture) covers frame
acquisition only; all sensor-specific properties (which historically
would rather have been integrated into the v4l2 system) are controlled
from userspace or over the network using our netpp library (which was
just released as open source).

The reasons for this were:
1. Hundreds of registers controlling various special properties on some
SoC sensors
2. One software stack and kernel should work with all sorts of camera
configurations
3. I'm lazy and hate writing a lot of boring code (ioctl()s...).
Also, we didn't want to bloat the kernel with property tables.
4. Some implementations did not have much to do with classic "video"

So nowadays we write sensor properties into XML files (or parse them
from existing descriptions) and generate a library that wraps all raw
sensor entities (registers and bit fields) into named properties, for
quick remote control and for direct access to peripherals on the
embedded target during the prototyping phase (this is what netpp does
for us).

Now the goal is to open-source the stuff from the Blackfin side, too (as
there seems to be no official v4l2 driver at the moment). Obviously, a
lot of work has been done on the upstream v4l2 side in the meantime, but
since I'm not completely into it yet, I'd like to ask the experts:

1. Can we do multi-sensor configurations on a tri-stated camera bus with
the current kernel framework?
2. Is there a preferred way to route ioctl()s back to userspace
"property handlers", so that the standard v4l2 ioctl()s can be
implemented while special sensor properties remain accessible from
userspace?
3. Has anyone measured latencies (or is aware of such measurements) with
respect to process response to a just-arrived video frame in an
RT_PREEMPT context? (I assume any RT_PREEMPT latency research could be
generalized to video, but I'm asking anyhow.)
4. For some applications it is mandatory to queue commands that are
committed to a sensor immediately during a frame blank. This makes
shared userspace and kernel access, for example to an SPI bus, rather
tricky. Can this be solved with the current (new) v4l2 framework?

Cheers,

- Martin

