Re: [PATCH 7/8] gpiolib: add new ioctl() for monitoring changes in line info

Mon, 2 Dec 2019 at 00:43 Kent Gibson <warthog618@xxxxxxxxx> wrote:
>

[snip!]

> > > >
> > > > How about reusing the already existing file descriptor associated with
> > > > the chip itself? We currently only implement the ioctl() operation on
> > > > it - the poll() and read() callbacks are empty.
> > > >
> > > > We'd need to add two new ioctls(): GPIOLINE_WATCH_IOCTL and
> > > > GPIOLINE_UNWATCH_IOCTL. The structures for both would look like this:
> > > >
> > > > struct gpioline_watch_request {
> > > >     __u32 lineoffset;
> > > >     struct gpioline_info info;
> > > > };
> > > >
> > > > struct gpioline_unwatch_request {
> > > >     __u32 lineoffset;
> > > > };
> > > >
> > > > When GPIOLINE_WATCH_IOCTL is called, we'd set up a watch for the
> > > > given line: the embedded gpioline_info structure would be filled
> > > > with the initial info, and we could then poll the gpiochip
> > > > descriptor for events and read them. The event structure would
> > > > look like this:
> > > >
> > > > struct gpioline_changed {
> > > >     __u32 event_type;
> > > >     __u64 timestamp;
> > > >     struct gpioline_info info;
> > > > };
> > > >
> > > > Calling GPIOLINE_UNWATCH_IOCTL would of course make the kernel
> > > > stop emitting events for the given line.
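> > > >
> > > > From user-space this would look more or less like this (a rough
> > > > sketch, assuming the names above and that events are read() from
> > > > the chip's fd):
> > > >
> > > > struct gpioline_watch_request wreq = { .lineoffset = 3 };
> > > > struct gpioline_unwatch_request ureq = { .lineoffset = 3 };
> > > > struct gpioline_changed chg;
> > > >
> > > > ioctl(chip_fd, GPIOLINE_WATCH_IOCTL, &wreq);
> > > > /* wreq.info now holds the initial snapshot of line 3 */
> > > >
> > > > /* later: poll(chip_fd) for POLLIN, then: */
> > > > read(chip_fd, &chg, sizeof(chg));
> > > > /* chg.info is the up-to-date line info */
> > > >
> > > > ioctl(chip_fd, GPIOLINE_UNWATCH_IOCTL, &ureq);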
> > > >
> > > > Does it make sense?
> > > >
> > >
> > > That makes sense.  But it doesn't really address the underlying problem
> > > that you have identified - it just makes it less likely that you will
> > > fill the kfifo.
> > >
> > > Correct me if I'm wrong, but a pathological process or processes could
> > > still swamp your kfifo with events, particularly if they are operating
> > > on bulks.
> >
> > Don't get me wrong - the assumption is that a process knows what it's
> > doing. We expect that if it watches lines for events, then it will
> > actually read them as soon as they arrive on the other end of the
> > FIFO. If not - then this won't affect others, it will fill up the FIFO
> > associated with this process' file descriptor and we'll just drop new
> > events until it consumes old ones. In other words: I'm not worried
> > about pathological processes.
> >
>
> The reader can't guarantee that it can read faster than changes can
> occur, no matter how well-intentioned it is.
>
> I am a bit worried if you just drop events, as there is no indication to
> userspace that that has occurred.

This is what happens now with line events anyway. I added a patch to
v2 of this series that emits a ratelimited debug message when the
kfifo is full. At least that leaves a trace in the kernel log.
Unfortunately there's no alternative to capping the FIFO's size -
otherwise a malicious process could hog all the memory simply by
never reading its events.
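
For reference, the drop path in that patch looks roughly like this (a
sketch using the existing lineevent naming, not the exact code):

        ret = kfifo_put(&le->events, ge);
        if (ret)
                wake_up_poll(&le->wait, EPOLLIN);
        else
                /* the consumer is too slow - drop the event, but
                 * leave a trace in the kernel log */
                pr_debug_ratelimited("event FIFO is full - event dropped\n");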

>
> > The problem here is that the file descriptor is created and there are
> > already several (up to 64) events waiting to be read. This just
> > doesn't feel right. If the process doesn't manage to consume all
> > initial events in time, we'll drop new ones. The alternative is to
> > allocate a larger FIFO but I just have a feeling that this whole
> > approach is wrong. I'm not sure any other subsystem does something
> > like this.
> >
> > >
> > > I'd be happier with a solution that addresses what happens when the
> > > kfifo is full, or even better prevents it from filling, and then see
> > > how that feeds back to the startup case.
> > >
> >
> > You can't really prevent it from overflowing as you can't
> > update/modify elements in the middle of the fifo.
> >
>
> You can if you let go of the idea of adding something to the fifo for
> every change.  If you know the change you want to signal is already in
> the fifo then you don't need to add it again.
>
> The idea I'm suggesting now is for the fifo to contain "your info on
> line X is stale" messages.  If that hasn't been ACKed by userspace then
> there is no point adding another for that line.  So worst case you have
> num_lines messages in the fifo at any time.
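>
> Roughly, as a sketch of the idea (names made up on the spot):
>
> struct gpioline_stale_event {
>         __u32 offset;   /* that's all the kernel tells you - you
>                          * have to do a LINEINFO ioctl to find out
>                          * what actually changed */
> };
>
> plus a per-line "needs_update" flag in the kernel, so a second
> change on the same line doesn't queue a second message.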

I see. But in this case I'm not sure a file descriptor is the right
interface. When poll() on an fd reports POLLIN, it means there is
data to read on that descriptor: actual data already sitting in the
FIFO, waiting to be consumed by user-space.
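
In other words, user-space will reasonably assume this contract (a
sketch, with watch_fd being the hypothetical monitoring descriptor
and ev whatever the event structure ends up being):

        struct pollfd pfd = { .fd = watch_fd, .events = POLLIN };

        poll(&pfd, 1, -1);
        if (pfd.revents & POLLIN)
                /* POLLIN promised queued data, so this must
                 * succeed without blocking */
                read(watch_fd, &ev, sizeof(ev));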

Let's imagine the following situation: we detect one of the conditions
for emitting an event in the kernel, we set the "needs_update" flag,
and we wake up the polling process - but it calls the LINEINFO
ioctl() on the changed line without ever reading the event from the
fd. What happens now? Does the unread event disappear from the fd
because the user "acked" it? And what about the ordering of events
when line X gets updated, then line Y, then X again, but the process
never read the first event?

IIRC the way line events are handled in sysfs (polling
'gpioXYZ/value', which doesn't behave like a FIFO) was criticized
for its unreliability and was one of the reasons for developing the
chardev.

I would be much happier with your previous proposal: getting the
line_info when setting up the watch and then receiving it in full
every time the status changes. That way we also get the "history" of
changes.
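
The consumer then stays trivial - something along these lines (a
sketch, reusing the structs from earlier in the thread):

        struct gpioline_changed chg;

        while (read(watch_fd, &chg, sizeof(chg)) == sizeof(chg)) {
                /* each record carries a full snapshot, so even a
                 * slow reader sees every state, in order */
                printf("[%llu] line '%s' now used by '%s'\n",
                       (unsigned long long)chg.timestamp,
                       chg.info.name, chg.info.consumer);
        }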

Bart

>
> > > Time to revisit your requirements...  Userspace doesn't need the details
> > > of each change, all you are after is an asynchronous mechanism to
> > > bring userspace up to date.  You only need to inform userspace that the
> > > line has changed since it was last informed of a change (or the fd was
> > > created).  You can collapse a set of changes that haven't reached
> > > userspace down to the last one.  Then the outstanding messages for
> > > userspace are at worst the number of lines being monitored.
> > >
> > > Wrt how to implement that, how about this:
> > > When a line is changed you set a "needs_update" flag on that line, and
> > > if not already set then issue a change to userspace via the fd.  That only
> > > contains the line offset - no info.
> >
> > Doesn't this bring us right back to my racy initial implementation?
> >
>
> It is closer to your original, but the race is only relevant when you
> are trying to match line info with changed events.  I'm suggesting we
> give up trying to do that.  It was probably a mistake to go down that
> particular rat hole - my bad.
>
> All we do here is notify userspace that their info is stale.
> They ACK that by performing a LINEINFO ioctl on the monitoring fd (it
> can't be the chip fd as we are introducing side effects - the ACK).
> When the kernel sends the response to the LINEINFO ioctl we know the
> kernel and userspace are in sync at that time.  And isn't that what you
> want to achieve - bringing userspace back into sync with the kernel
> without having to poll?
>
> > > The userspace would then perform a LINEINFO ioctl to read the line state
> > > and clear the "needs_update" flag in kernel.
> > > At that point the kernel and userspace are back in sync for that line -
> > > until the next change.
> > > The LINEINFO ioctl would be on the monitor fd, not the chip fd, but
> > > could otherwise be the same as the existing ioctl from the userspace pov.
> > > This logic would apply for each monitoring fd which is monitoring that
> > > line.
> >
> > But then the kernel would assume that the user-space does a
> > responsible thing with the event it reads, while the process may as
> > well just discard all arriving data.
> >
>
> The kernel doesn't assume anything - it just won't send another changed
> event for a line until userspace ACKs the previous one.
> If userspace doesn't ACK a changed event then it will not get any
> more for that line, so it has a strong incentive to ACK it.
> Note that the "your info is stale" message only contains the line
> offset, so userspace is forced to perform the LINEINFO ioctl to find
> out what the current state actually is.
>
> Kent.
>
> > Bart
> >
> > >
> > > Wrt the startup case, the userspace would create the monitor fd and then
> > > call the LINEINFO ioctl on each monitored line.
> > > Then you are in sync and good to go...
> > >
> > > There may well be better ways to implement that handshake - that is just
> > > the first that comes to mind.
> > >
> > > Kent.
> > >
> > > > Bart
> > > >
> > > > >
> > > > > Kent.
> > > > >
> > > > > > Bart
> > > > > >
> > > > > > >
> > > > > > > I'm used to setting up streams like this in a networked environment
> > > > > > > where returning data via the stream setup isn't an option, the only
> > > > > > > option is to sync via the stream itself, so I overlooked the possibility
> > > > > > > of using the ioctl return.
> > > > > > >
> > > > > > > > > The name is immutable(??), and so is pointless to include.
> > > > > > > >
> > > > > > > > It is now, but let's be future-proof. I can imagine having modifiable
> > > > > > > > line names in the future. The code putting this info in struct
> > > > > > > > gpioline_info wouldn't really be duplicated, and the size of such
> > > > > > > > small structures doesn't matter much - it's still a single context
> > > > > > > > switch to read it.
> > > > > > > >
> > > > > > > > > The consumer only changes when the line is requested, so I was willing
> > > > > > > > > to live with still having to poll for that.
> > > > > > > > > And what you gain by reporting bulk lines you might lose in increased
> > > > > > > > > report size and associated code.  OTOH it would make it explicit which
> > > > > > > > > lines are being changed together...
> > > > > > > > > So I could well be wrong on that - a full bulk report may be better.
> > > > > > > >
> > > > > > > > I'm not sure we need bulk reporting - just as we don't provide bulk
> > > > > > > > GET_GPIOLINE_INFO. The corresponding ioctl() structure would get
> > > > > > > > pretty complicated soon.
> > > > > > > >
> > > > > > >
> > > > > > > That was my initial feeling as well.
> > > > > > > And bulk is no longer an option if you want to include name in the change
> > > > > > > report.
> > > > > > >
> > > > > > > Kent.
> > > > > > >
> > > > > > > > Bartosz
> > > > > > > >
> > > > > > > > >
> > > > > > > > > Kent.
> > > > > > > > >
> > > > > > > > > > > And a timestamp might be useful, as per gpioevent_data?
> > > > > > > > > >
> > > > > > > > > > Sure thing!
> > > > > > > > > >
> > > > > > > > > > Bart
> > > > > > > > > >
> > > > > > > > > > >
> > > > > > > > > > > Kent.
> > > > > > > > > > >
> > > > > > > > > > > >  /* Linerequest flags */
> > > > > > > > > > > >  #define GPIOHANDLE_REQUEST_INPUT     (1UL << 0)
> > > > > > > > > > > >  #define GPIOHANDLE_REQUEST_OUTPUT    (1UL << 1)
> > > > > > > > > > > > @@ -176,6 +210,8 @@ struct gpioevent_data {
> > > > > > > > > > > >
> > > > > > > > > > > >  #define GPIO_GET_CHIPINFO_IOCTL _IOR(0xB4, 0x01, struct gpiochip_info)
> > > > > > > > > > > >  #define GPIO_GET_LINEINFO_IOCTL _IOWR(0xB4, 0x02, struct gpioline_info)
> > > > > > > > > > > > +#define GPIO_GET_LINECHANGED_FD_IOCTL \
> > > > > > > > > > > > +             _IOWR(0xB4, 0x0b, struct gpioline_changed_fd_request)
> > > > > > > > > > > >  #define GPIO_GET_LINEHANDLE_IOCTL _IOWR(0xB4, 0x03, struct gpiohandle_request)
> > > > > > > > > > > >  #define GPIO_GET_LINEEVENT_IOCTL _IOWR(0xB4, 0x04, struct gpioevent_request)
> > > > > > > > > > > >
> > > > > > > > > > > > --
> > > > > > > > > > > > 2.23.0
> > > > > > > > > > > >



