Re: Kinect sensor and Linux kernel driver.

Hi,

On 12/06/2010 10:18 PM, Antonio Ospite wrote:
> Hi,
>
> a first, very simplified, Linux kernel driver for the Kinect sensor
> device is now available, so you can now use it as a ridiculously
> expensive webcam with any v4l2 application.
>
> Here's the code:
> http://git.ao2.it/gspca_kinect.git/


Great!

> And here's some background about it (meant also for non-OpenKinect
> folks):
> http://ao2.it/blog/2010/12/06/kinect-linux-kernel-driver
>
> As you can see this driver is just some "advanced copy&paste" from
> libfreenect, plus reducing the packet scan routine to the bare minimum.
> Taking some code from libfreenect should be OK as it comes under GPLv2
> (dual-licensed with an Apache license), but let me know if you think
> there could be any issues.
>
> The gspca framework proved itself very valuable once again (thanks
> Jean-François); for this simple proof-of-concept driver it took care of
> the whole isoc transfer setup for us.
>
> Now the hard part begins; here's a loose TODO list:
>    - Discuss the "fragmentation problem":
>       * the webcam kernel driver and the libusb backend of libfreenect
>         are not going to conflict with each other in practice, but code
>         duplication could be avoided to some degree; we could start
>         listing the advantages and disadvantages of a v4l2 backend
>         as opposed to a libusb backend for video data in libfreenect
>         (don't think in terms of userspace/kernelspace for now).

I think that being able to use the Kinect as just a webcam in apps like Cheese,
Skype and Google Chat is a major advantage of the kernel driver. I also
think that in the long run it is not useful to have 2 different drivers.

So to me the long-term goal would be a kernel driver exposing all functionality
in such a way that existing apps can fully use the Kinect as a webcam
(including tilt/pan control), and of course also exposing the Kinect's extra
functionality.

I don't see libfreenect going away, but I see it talking to the v4l2 device
nodes rather than talking to the device directly.
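
To give an idea of what that would look like, a v4l2 backend in libfreenect
would basically start out like this (untested sketch, error handling mostly
omitted, and /dev/video0 is only an example node name):

/* untested sketch: open the v4l2 node instead of going through libusb */
#include <stdio.h>
#include <string.h>
#include <fcntl.h>
#include <unistd.h>
#include <sys/ioctl.h>
#include <linux/videodev2.h>

int main(void)
{
        struct v4l2_capability cap;
        int fd = open("/dev/video0", O_RDWR);   /* example node only */

        if (fd < 0) {
                perror("open");
                return 1;
        }
        memset(&cap, 0, sizeof(cap));
        if (ioctl(fd, VIDIOC_QUERYCAP, &cap) == 0)
                printf("driver: %s card: %s\n", cap.driver, cap.card);
        /* from here on, the usual VIDIOC_REQBUFS/QBUF/STREAMON/DQBUF
         * streaming loop would replace the libusb isoc handling */
        close(fd);
        return 0;
}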

<snip>

>    - Check if gspca can handle two video nodes for the same USB device
>      in a single driver (Kinect sensor uses ep 0x81 for color data and
>      ep 0x82 for depth data).

Currently gspca cannot handle 2 streaming endpoints / 2 video nodes for 1
USB device. I've been thinking about this and I think that the simplest solution
is to simply pretend that the Kinect's video functionality consists of 2 different
devices / USB interfaces in a multifunction device. Even though it does not,
what I'm proposing is for the kinect driver to call gspca_dev_probe twice with
2 different sd_desc structures, thus creating 2 /dev/video nodes, frame queues
and sets of isoc management variables.
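
Roughly like this in the kinect subdriver (untested sketch; sd_desc_video /
sd_desc_depth and the struct sd name are just placeholders, and I'm assuming
gspca_dev_probe keeps its current signature from gspca.h):

/* untested sketch: register two gspca "devices" from one usb probe */
static int sd_probe(struct usb_interface *intf,
                    const struct usb_device_id *id)
{
        int ret;

        /* first /dev/video node, for the color stream on ep 0x81 */
        ret = gspca_dev_probe(intf, id, &sd_desc_video,
                              sizeof(struct sd), THIS_MODULE);
        if (ret < 0)
                return ret;

        /* second /dev/video node, for the depth stream on ep 0x82;
         * gspca.c will need changes to allow a second gspca_dev on
         * the same interface */
        return gspca_dev_probe(intf, id, &sd_desc_depth,
                               sizeof(struct sd), THIS_MODULE);
}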

This means that the alt_xfer function in gspca.c needs to be changed to not
always return the first isoc ep. We need to add an ep_nr variable to the
cam struct, and when that is set alt_xfer should search for the ep with that
number and return it (unless the wMaxPacketSize for that ep is 0 in the
current alt setting, in which case NULL should be returned).
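
Something along these lines (untested sketch; I'm assuming cam->ep_nr stores
the full endpoint address, e.g. 0x81 or 0x82, with 0 meaning "keep the current
first-match behaviour"):

/* untested sketch of the alt_xfer() change in gspca.c */
static struct usb_host_endpoint *alt_xfer(struct usb_host_interface *alt,
                                          int xfer, int ep_nr)
{
        struct usb_host_endpoint *ep;
        int i, attr;

        for (i = 0; i < alt->desc.bNumEndpoints; i++) {
                ep = &alt->endpoint[i];
                attr = ep->desc.bmAttributes & USB_ENDPOINT_XFERTYPE_MASK;
                if (attr != xfer || ep->desc.wMaxPacketSize == 0)
                        continue;
                if (ep_nr == 0)                 /* old behaviour: first match */
                        return ep;
                if (ep->desc.bEndpointAddress == ep_nr)
                        return ep;              /* the ep the subdriver asked for */
        }
        return NULL;    /* also hit when the requested ep has
                           wMaxPacketSize 0 in this alt setting */
}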

>    - Decide if we want two separate video nodes, or a
>      combined RGB-D data stream coming from a single video device node.
>      (I haven't even looked at the synchronization logic yet).

I think 2 separate nodes is easiest; also see above.

Regards,

Hans

