Re: [RFC] Microsoft Touch Mouse driver [was: Re: your mail]

On 25-01-12 14:26, Henrik Rydberg wrote:
>> +
>> +struct mstm_state {
>> +	bool advance_flag;
>> +	int x;
>> +	int y;
>> +	unsigned char last_timestamp;
>> +	unsigned char data[MSTM_DATA_WIDTH][MSTM_DATA_HEIGHT];
>> +};
> The ability to simply send an "input screen" would be perfect
> here. This device may be on the border of what can/should be handled
> via input events. A memory mapped driver, uio-based or something
> similar, could be an option.

On 30-01-12 08:27, Dmitry Torokhov wrote:
> On Wed, Jan 25, 2012 at 04:00:35PM +0100, Henrik Rydberg wrote:
>>>> One possible option could be to use the
>>>> slots, but only send ABS_MT_TOUCH_MAJOR or ABS_MT_PRESSURE, nothing
>>>> else. The device would (rightfully) not be recognized as MT since the
>>>> position is missing, all data would be available for processing in
>>>> userspace, and bandwidth would be minimized since there could only be
>>>> so many changes coming in per millisecond.
>>> So how does userspace then find out where these pressure points are
>>> located?
>>> Or do you mean to just dump all data to user space (15 * 13 *
>>> sizeof(ABS_MT_PRESSURE value) + overhead)?
>> Having each pressure point represented by one slot id was the idea.
>> Userspace would have to know how the points are mapped, of
>> course. Still not overly happy about the general fit, though. Dmitry?
> I am having doubts that this device, as it is, is suitable for an input
> interface; I really do not think that bastardizing slot IDs is a good
> idea. Unless we move all computation necessary to identify individual
> contacts into the kernel (and then use the standard MT protocol), I'd
> recommend looking into a hidraw + uinput solution.
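
To make it concrete, the slot idea above would boil down to something like
this on the kernel side: one slot per grid cell, carrying nothing but
pressure. This is only my sketch of it; the 15x13 dimensions, the function
name and the 8-bit pressure scale are guesses for illustration, not code
from the patch or from Henrik:

#include <linux/input.h>
#include <linux/input/mt.h>

/* Sketch only: one MT slot per grid cell, reporting pressure and
 * nothing else.  Assumes input_mt_init_slots() was called for
 * MSTM_NUM_CELLS slots at setup time.
 */
#define MSTM_DATA_WIDTH		15
#define MSTM_DATA_HEIGHT	13
#define MSTM_NUM_CELLS		(MSTM_DATA_WIDTH * MSTM_DATA_HEIGHT)

static void mstm_report_frame(struct input_dev *input,
			      const unsigned char data[MSTM_DATA_WIDTH][MSTM_DATA_HEIGHT])
{
	int x, y;

	for (x = 0; x < MSTM_DATA_WIDTH; x++) {
		for (y = 0; y < MSTM_DATA_HEIGHT; y++) {
			/* fixed cell-to-slot mapping; userspace has to know it */
			input_mt_slot(input, x * MSTM_DATA_HEIGHT + y);
			input_report_abs(input, ABS_MT_PRESSURE, data[x][y]);
		}
	}
	/* the input core filters unchanged per-slot values, so only cells
	 * that actually changed reach userspace */
	input_sync(input);
}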

Ok, so it's pretty clear the consensus is that there should be a different interface for pushing the data to user space. My initial reaction is V4L: the data is basically a stream of monochrome frames, so it roughly fits the video model, and it could even be of some benefit to all those emulate-multi-touch-using-webcam projects. Other options I see are mmap on the input device, a separate (char?) device, UIO, or having the whole thing in userspace.
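
For the all-in-userspace option, the reading side would be little more than
a hidraw loop, roughly like the sketch below. The device node, report size
and report layout here are placeholders; the real ones would have to come
from the control descriptor:

#include <fcntl.h>
#include <stdio.h>
#include <unistd.h>

#define MSTM_REPORT_SIZE 32	/* placeholder, not the real report size */

int main(void)
{
	unsigned char report[MSTM_REPORT_SIZE];
	ssize_t n;
	int fd = open("/dev/hidraw0", O_RDONLY);	/* placeholder node */

	if (fd < 0) {
		perror("open");
		return 1;
	}

	/* each read() returns one raw report from the touch-surface HID
	 * interface */
	while ((n = read(fd, report, sizeof(report))) > 0) {
		/* decode report[] into the 15x13 pressure image and do
		 * the contact tracking here */
	}

	close(fd);
	return 0;
}

The decoding and contact tracking would then live entirely in a library or
daemon, which is basically what Dmitry suggests above.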

However, there's one tiny hack I'd like to have, and I'm unsure whether it can be done from userspace: the mouse has only one physical push button, so on its own it can't distinguish between left, middle and right clicks. It could obviously do so based on the information it gathers from its touch surface, but the firmware only implements the following dumb heuristic: when there is a single finger on the surface and it sits on the right part, assume a right click; otherwise assume a left click. This means that you need to lift your left finger when performing a right click.
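
What I would rather base the decision on is the pressure image itself,
something along these lines; the 15x13 dimensions, the middle-of-the-surface
split and the assumption that the first index of the data array runs left to
right are mine, purely for illustration:

/* Illustration only: decide left vs. right by comparing the total
 * pressure on each half of the surface, instead of counting fingers.
 */
#define MSTM_DATA_WIDTH		15
#define MSTM_DATA_HEIGHT	13

int mstm_click_is_right(const unsigned char data[MSTM_DATA_WIDTH][MSTM_DATA_HEIGHT])
{
	unsigned int left = 0, right = 0;
	int x, y;

	for (x = 0; x < MSTM_DATA_WIDTH; x++) {
		for (y = 0; y < MSTM_DATA_HEIGHT; y++) {
			if (x < MSTM_DATA_WIDTH / 2)
				left += data[x][y];
			else
				right += data[x][y];
		}
	}

	return right > left;
}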

Now, the receiver is represented as 3 HID devices: a keyboard descriptor, a mouse descriptor and a miscellaneous/control descriptor (which carries the touch surface data). My plan was to intercept mouse clicks on the mouse descriptor, look at which part of the touch surface (left/right) carries the most pressure and, if needed, turn the click into a right click.
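
If it has to live in userspace after all, the bluntest version of that plan
I can think of is an evdev grab on the mouse node plus re-injection through
uinput, as in the sketch below. The event node path, the device name and
surface_says_right() are placeholders (the latter standing in for the
left/right pressure check sketched earlier):

#include <fcntl.h>
#include <linux/input.h>
#include <linux/uinput.h>
#include <string.h>
#include <sys/ioctl.h>
#include <unistd.h>

/* placeholder for the left/right pressure decision sketched above */
static int surface_says_right(void)
{
	return 0;
}

int main(void)
{
	struct uinput_user_dev uidev;
	struct input_event ev;
	int mouse = open("/dev/input/event3", O_RDONLY);	/* placeholder */
	int ui = open("/dev/uinput", O_WRONLY);

	if (mouse < 0 || ui < 0)
		return 1;

	/* take the real mouse away from other readers */
	ioctl(mouse, EVIOCGRAB, 1);

	/* minimal replacement device: buttons plus relative motion */
	ioctl(ui, UI_SET_EVBIT, EV_KEY);
	ioctl(ui, UI_SET_KEYBIT, BTN_LEFT);
	ioctl(ui, UI_SET_KEYBIT, BTN_RIGHT);
	ioctl(ui, UI_SET_EVBIT, EV_REL);
	ioctl(ui, UI_SET_RELBIT, REL_X);
	ioctl(ui, UI_SET_RELBIT, REL_Y);

	memset(&uidev, 0, sizeof(uidev));
	strcpy(uidev.name, "mstm-click-filter");	/* made-up name */
	write(ui, &uidev, sizeof(uidev));
	ioctl(ui, UI_DEV_CREATE);

	/* forward everything, turning BTN_LEFT into BTN_RIGHT when the
	 * surface says so; a real version would remember which code the
	 * press went out as so the release matches */
	while (read(mouse, &ev, sizeof(ev)) == sizeof(ev)) {
		if (ev.type == EV_KEY && ev.code == BTN_LEFT &&
		    surface_says_right())
			ev.code = BTN_RIGHT;
		write(ui, &ev, sizeof(ev));
	}

	return 0;
}

That would work as a proof of concept, but it is exactly the kind of
per-client hack I would like to avoid, hence the question below.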

Any hints on how to do this from userspace? Of course I could hack up xf86-input-evdev, but I'd like to do this at the most generic layer possible, i.e. the input subsystem.

--
Maurus Cuelenaere

--
To unsubscribe from this list: send the line "unsubscribe linux-input" in
the body of a message to majordomo@xxxxxxxxxxxxxxx
More majordomo info at  http://vger.kernel.org/majordomo-info.html

