Re: [RFC v1 0/4] Input: support virtual objects on touchscreens

Hi Javier,

On Thu, Apr 27, 2023 at 05:59:42PM +0200, Javier Carrasco wrote:
> Hi,
> 
> On 25.04.23 18:02, Jeff LaBundy wrote:
> > Hi Thomas,
> > 
> > On Tue, Apr 25, 2023 at 05:29:39PM +0200, Thomas Weißschuh wrote:
> >> Hi Javier,
> >>
> >> On 2023-04-25 13:50:45+0200, Javier Carrasco wrote:
> >>> Some touchscreens are shipped with a physical layer on top of them where
> >>> a number of buttons and a resized touchscreen surface might be available.
> >>>
> >>> In order to generate proper key events by overlay buttons and adjust the
> >>> touch events to a clipped surface, these patches offer a documented,
> >>> device-tree-based solution by means of helper functions.
> >>> An implementation for a specific touchscreen driver is also included.
> >>>
> >>> The functions in ts-virtobj provide a simple workflow to acquire
> >>> physical objects from the device tree, map them into the device driver
> >>> structures as virtual objects and generate events according to
> >>> the object descriptions.
> >>>
> >>> This solution has been tested with a JT240MHQS-E3 display, which uses
> >>> the st1624 as a touchscreen and provides two overlay buttons and a frame
> >>> that clips its effective surface.
> >>
> >> There are quite a few notebooks from Asus that feature a printed
> >> numpad on their touchpad [0]. The mapping from the touch events to the
> >> numpad events needs to happen in software.
> > 
> > That example seems like a fringe use case in my opinion; I think the
> > gap filled by this RFC is the case where a touchscreen has a printed
> > overlay with a key that represents a fixed function.
> 
> Exactly, this RFC addresses precisely such printed overlays.
> > 
> > One problem I do see here is something like libinput or multitouch taking
> > hold of the input device, and swallowing the key presses because it sees
> > the device as a touchscreen and is not interested in these keys.
> 
> Unfortunately I do not know libinput or multitouch and I might be
> misunderstanding you, but I guess the same would apply to any event
> consumer that treats touchscreens as touch event producers and nothing else.
> 
> Should they not check the supported events from the device instead of
> making such assumptions? The key events this RFC adds are defined in the
> device tree and are therefore published as device capabilities. That is,
> for example, how evtest discovers the supported events, which are then
> delivered accordingly. Is that not the
> right way to do it?

evtest is just that, a test tool. It's handy for ensuring the device emits
the appropriate input events in response to hardware inputs, but it is not
necessarily representative of how the input device may be used in practice.

I would encourage you to test this solution in a simple environment such as
Raspbian, with the virtual keys mapped to easily recognizable functions like
volume up/down.

Here, you will find that libinput will grab the device and declare it to be
a touchscreen based on the input events it advertises. However, you will not
see the volume up/down keys being handled.

If you break out the virtual keypad as a separate input device, however, you
will see libinput additionally recognize it as a keyboard, and the volume
up/down keys will be handled. It is for this reason that a handful of drivers
with this kind of mixed functionality (e.g. ad714x) already register a
separate input device for each function.
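
For illustration, here is a bare-bones sketch of what I mean; the helper
name st1232_register_keypad() and the overlay_keys[] table are made up for
this example and are not taken from your RFC:

#include <linux/device.h>
#include <linux/input.h>

/* example keycodes for the printed overlay buttons */
static const unsigned int overlay_keys[] = { KEY_VOLUMEUP, KEY_VOLUMEDOWN };

static int st1232_register_keypad(struct device *dev,
				  struct input_dev **keypad)
{
	struct input_dev *input;
	int i;

	/* allocate a second, driver-owned input device for the keys */
	input = devm_input_allocate_device(dev);
	if (!input)
		return -ENOMEM;

	input->name = "st1232-overlay-keys";
	input->id.bustype = BUS_I2C;

	for (i = 0; i < ARRAY_SIZE(overlay_keys); i++)
		input_set_capability(input, EV_KEY, overlay_keys[i]);

	*keypad = input;
	return input_register_device(input);
}

The touchscreen input device stays exactly as it is today; only the key
events move to the new node.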

As a matter of principle, I find it to be most flexible for logically separate
functions to be represented as logically separate input devices, even if those
input devices all stem from the same piece of hardware. Not only does it allow
you to attach different handlers to each device (i.e. file descriptor), but it
also allows user space to inhibit one device but not the other, etc.
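
To make the file-descriptor point concrete, here is a rough user-space
sketch; the event node paths are only examples for a system where the two
functions show up as separate devices:

#include <fcntl.h>
#include <poll.h>
#include <sys/ioctl.h>
#include <linux/input.h>

int main(void)
{
	int ts = open("/dev/input/event2", O_RDONLY);	/* touchscreen */
	int keys = open("/dev/input/event3", O_RDONLY);	/* overlay keys */

	/*
	 * Take an exclusive grab on the keypad only; the touchscreen
	 * remains available to every other client.
	 */
	if (keys >= 0)
		ioctl(keys, EVIOCGRAB, 1);

	struct pollfd fds[] = {
		{ .fd = ts, .events = POLLIN },
		{ .fd = keys, .events = POLLIN },
	};

	/* events from the two devices arrive and are handled independently */
	poll(fds, 2, -1);

	return 0;
}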

Maybe the right approach, which your RFC already seems to support, is to simply
let the driver decide whether to pass the touchscreen input_dev or a different
input_dev. The driver would be responsible for allocating and registering the
keypad; your functions simply set the capabilities for, and report events from,
whichever input_dev is passed to them. This is something to consider for your
st1232 example.
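
Roughly, I am picturing helpers of this shape; ts_virtobj_report_key() is a
placeholder name for illustration, not what your RFC actually calls it:

#include <linux/input.h>

/*
 * Report a virtual-object key through whichever input_dev the caller
 * passes, so the driver decides whether that is the touchscreen itself
 * or a separately registered keypad.
 */
void ts_virtobj_report_key(struct input_dev *input, unsigned int keycode,
			   bool pressed)
{
	input_report_key(input, keycode, pressed);
	input_sync(input);
}

With that shape, whether the keys land on the touchscreen node or on a
dedicated keypad node is purely the caller's choice.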

> 
> Thanks a lot for your feedback!
> > 
> > Therefore, my first impression is that the virtual keypad may be better
> > served by registering its own input device.
> > 
> > Great work by the way, Javier!
> > 
> >>
> >> Do you think your solution is general enough to also support this
> >> use case?
> >>
> >> The differences I see are
> >> * not device-tree based
> >> * touchpads instead of touchscreens
> >>
> >>> [..]
> >>
> >> [0] https://unix.stackexchange.com/q/494400
> > 
> > Kind regards,
> > Jeff LaBundy

Kind regards,
Jeff LaBundy


