Re: [PATCH] input: Add a detailed multi-touch finger data report

Henrik Rydberg wrote:


Sorry I'm slightly late to this discussion.

> > is there hardware that can do finger identification? (i.e. thumb vs. index
> > finger)? 

Yes, in the extreme case.  There are research table systems which image the surface:
you get an image of the hand above the surface, from which they compute each finger's
position relative to the general hand outline, and which fingers (and how much of the
hand) are touching (i.e. in focus). These can also tell you something about proximity
(not yet touching the surface), much the way magnetic tablet technologies can tell
you proximity. After processing, you know which finger(s) are touching (or in
proximity), associated with a right or left hand.

> Should we accommodate for this?

Should we bother right now? The extreme often becomes the norm with time, but we need
to draw the line at a sane place.  The case closely related to finger identification
that I think *is* worth accommodating immediately is described below.

> 
> I believe we should start with events that fit the general idea of
> detailed finger information, and which can be produced by at least one
> existing kernel driver, so that we can test it immediately. I believe
> the proposed set pretty much covers it. I would love to be wrong. :-)
> 
> Regarding identification, one of the harder problems involved in
> making use of finger data is that of matching an anonymous finger at a
> certain position to an identified finger, tagged with a number.  This
> is very important in order to know if the fingers moved, which finger
> did the tapping, how much rotation was made, etc. Generally, this is
> the (Euclidean) bipartite matching problem, and is one of the major
> computations a multi-touch X driver needs to perform. I can imagine
> such identification features eventually ending up on a chip. Maybe
> someone more knowledgeable in hardware can give us a hint.
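The matching step Henrik describes can be sketched in a few lines. This is only an illustration (the function name is made up, and the greedy nearest-first strategy is a simplification; an optimal assignment would use something like the Hungarian algorithm):

```python
import math

def match_contacts(tracked, observed):
    """Greedily pair previously identified contacts with new anonymous
    positions by Euclidean distance.  Returns {tracked_id: observed_index};
    unmatched observations would become candidate new contacts."""
    # Enumerate all candidate pairings, nearest first.
    pairs = sorted(
        ((math.dist(tp, op), tid, oi)
         for tid, tp in tracked.items()
         for oi, op in enumerate(observed)),
        key=lambda t: t[0],
    )
    assignment, used_obs = {}, set()
    for _, tid, oi in pairs:
        if tid not in assignment and oi not in used_obs:
            assignment[tid] = oi
            used_obs.add(oi)
    return assignment

# Two fingers moved slightly between frames; identity follows proximity:
tracked = {1: (100, 100), 2: (200, 150)}
observed = [(205, 148), (98, 103)]
print(match_contacts(tracked, observed))  # → {1: 1, 2: 0}
```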

I agree with Peter's point that modeling all of this (fingers, markers, etc)
as multiple pointers will cause madness to ensue.

The way I distinguish devices in my mind is by "sensors".  If multiple
touches, markers, fingers, or users all use the same sensor (at the same resolution),
then the information should start off life together in the same input stream: this way
the relative time ordering of all the events makes sense.
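To make the ordering point concrete, here is a toy sketch (the tuple layout is invented for illustration, not any existing protocol): events from two contacts on one sensor, merged by timestamp into a single stream, keep their relative order intact.

```python
import heapq

# Hypothetical (timestamp, contact_id, x, y) event tuples from two
# contacts on the same sensor, each already in time order.
finger_a = [(10, 'A', 5, 5), (30, 'A', 6, 5)]
finger_b = [(20, 'B', 9, 9), (40, 'B', 9, 8)]

# One stream per sensor: a timestamp-ordered merge preserves the
# relative ordering of all touches.
stream = list(heapq.merge(finger_a, finger_b))
print([t for t, *_ in stream])  # → [10, 20, 30, 40]
```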

Some systems (e.g. MERL's DiamondTouch) give you an ID associated with the user
(in that case, it works by knowing where you are sitting, via capacitive coupling). So
it is actually the seat, rather than a particular person, that is identified.

Another case that will be common soon is the ability to sense and identify
markers on the surface (which can be distinguished from each other).  I know of at
least three hardware systems able to do this. One of these will be in commodity hardware
soon enough to worry about immediately.  So having an ID reported with a touch is clearly
needed, whether it identifies a thumb, an index finger, or some marker.
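One way to picture "an ID reported with a touch" is a per-contact record carrying both a stable tracking ID and a tool/marker type. This is only a sketch of the idea; the names and fields are hypothetical, not any actual event interface:

```python
from dataclasses import dataclass
from enum import Enum

class Tool(Enum):
    FINGER = 0
    THUMB = 1
    MARKER = 2   # a physically tagged object on the surface

@dataclass
class Contact:
    tracking_id: int   # stable across frames while the touch persists
    tool: Tool         # what the sensor identified, if anything
    x: float
    y: float

c = Contact(tracking_id=7, tool=Tool.MARKER, x=120.0, y=80.5)
print(c.tool.name)  # → MARKER
```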

Whether such markers would have any user identity directly associated with them is less than
clear, though we'll certainly start giving them such identity either by convention or
fiat somewhere in the system as the events get processed.

We may also face co-located sensors, where two sensors are geometrically on top of
each other, co-aligned but possibly reporting different coordinates at differing
resolutions.  I'm thinking of the Dell Latitude XT here, though I don't yet know
enough about it to know whether its pen in fact uses a different sensor than the
capacitive multi-touch screen.  I'm still trying to get precise details on this device.
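For co-located, co-aligned sensors, the obvious first step is a linear rescale into a shared coordinate space. A minimal sketch, with illustrative (made-up) resolutions:

```python
def to_shared(x, y, src_max, dst_max):
    """Map a point from one sensor's coordinate range into another's,
    assuming the two sensors are perfectly co-aligned."""
    sx, sy = src_max
    dx, dy = dst_max
    return x * dx / sx, y * dy / sy

# A pen digitizer reporting 0..2047 mapped onto a 0..1023 touch grid:
print(to_shared(1024, 512, (2048, 2048), (1024, 1024)))  # → (512.0, 256.0)
```

Real devices would also need an offset and possibly a rotation, but the point is that co-alignment makes the transform a fixed, per-device mapping.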

Another question is whether an ellipse models a touch adequately for now; other sensors
may report more complex geometric information.  There is a slippery slope here, of course:
in the extreme case noted above, research systems give you a full image, which seems like overkill.
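The image-vs-ellipse tradeoff can be made concrete: a full touch image can be reduced to an ellipse via its second-order moments, which is roughly the kind of summary a major/minor-axis report captures. A sketch under that assumption (not any driver's actual code):

```python
import math

def ellipse_from_blob(pixels):
    """Reduce a set of touching (x, y) pixels to a centroid, axis
    lengths, and orientation via second-order moments."""
    n = len(pixels)
    cx = sum(x for x, _ in pixels) / n
    cy = sum(y for _, y in pixels) / n
    sxx = sum((x - cx) ** 2 for x, _ in pixels) / n
    syy = sum((y - cy) ** 2 for _, y in pixels) / n
    sxy = sum((x - cx) * (y - cy) for x, y in pixels) / n
    # Eigenvalues of the covariance give the squared semi-axes (up to scale).
    tr, det = sxx + syy, sxx * syy - sxy * sxy
    disc = math.sqrt(max(tr * tr / 4 - det, 0.0))
    major = 2 * math.sqrt(tr / 2 + disc)
    minor = 2 * math.sqrt(max(tr / 2 - disc, 0.0))
    angle = 0.5 * math.atan2(2 * sxy, sxx - syy)
    return (cx, cy), major, minor, angle

blob = [(x, y) for x in range(4) for y in range(8)]  # elongated contact
centroid, major, minor, angle = ellipse_from_blob(blob)
```

For this elongated blob the orientation comes out along the y axis (angle π/2) with the major axis longer than the minor, i.e. the image collapses to a handful of numbers.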

I also note that the current input system does not provide any mechanism or hint
to associate an input device with a particular frame buffer, or input devices with
each other.  Maybe it should, maybe it shouldn't... Opinions?

Hope this helps.  The problem here is to draw the line *before* we win our complexity merit badge,
while leaving things open to be extended as more instances of real hardware appear and we gain
more experience.
                        - Jim

-- 
Jim Gettys <jg@xxxxxxxxxx>
One Laptop Per Child

