Re: regression: gpiolib: switch the line state notifier to atomic unexpected impact on performance

On Wed, Mar 12, 2025 at 09:08:29AM +0100, David Jander wrote:
> On Wed, 12 Mar 2025 09:32:56 +0800
> Kent Gibson <warthog618@xxxxxxxxx> wrote:
>
> > On Tue, Mar 11, 2025 at 12:03:46PM +0100, David Jander wrote:
> > >
> > > Indeed, it does. My application is written in python and uses the python gpiod
> > > module. Even in such an environment the impact is killing.
> >
> > Interesting - the only reason I could think of for an application
> > requesting/releasing GPIOs at a high rate was if it was built on top of
> > the libgpiod tools and so was unable to hold the request fd.
>

Btw, I'm not suggesting that anyone build an app on top of the libgpiod
tools - I was just hunting for an explanation as to why anyone might be
opening and closing chips or requests at a high rate.

> I didn't want to bother the list with the details, but this is during the
> configuration phase of the application.

The fact that close() was slow is a valid issue, but it left me wondering
why you needed to do that so frequently.
It helps to understand what you are doing, and why, to see if there are
other, better solutions - or if there should be.

> It receives many configuration messages
> for different IO objects at a fast pace. Most of those objects use one or more
> GPIO lines identified by their label. So the application calls
> gpiod.find_line(label) on each of them. Apparently libgpiod (version 1.6.3 in
> our case) isn't very efficient, since it will open and close each of the
> gpiodev devices in order to query for each of the gpio lines. I wouldn't blame
> libgpiod (python bindings) for doing it that way, since open()/close() of a
> chardev are expected to be fast, and caching this information is probably
> error prone anyway, since AFAIK user space cannot yet be informed of changes
> to gpio chips from kernel space.
>

Ok, if the issue is purely the name -> (chip, offset) mapping, it is pretty
safe to assume that line names are immutable - though not necessarily
unique - so caching the mapping should be fine.

The kernel can already tell userspace about a number of changes.
What changes are you concerned about - adding/removing chips?

> If this had been this slow always (even before 6.13), I would probably have
> done things a bit differently and cached the config requests to then "find"
> the lines in batches directly working on the character devices instead of
> using gpiod, so I could open/close each one just once for finding many
> different lines each time.
>

The libgpiod v2 tools do just that - they scan the chips once rather
than once per line.  But that functionality is not exposed in the
libgpiod v2 API, as the C interface is hideous and it is difficult to
provide well-defined behaviour (e.g. in what order are the chips scanned?).
So it is left to the application to determine how it wants to do it.
There isn't even a find_line() equivalent in v2, IIRC.
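
For what it's worth, the batched lookup an application might roll itself
can be sketched in a few lines of Python. As above, `scan_chips` is a
hypothetical helper yielding (chip_path, line_names) per chip, which a real
implementation would back with the gpiod bindings, opening each chip only
once:

```python
def find_lines_batch(wanted, scan_chips):
    """Resolve many line names with one pass over the chips.

    Returns {name: (chip_path, offset)} for the names found;
    names that match nothing are simply absent from the result.
    First match wins for duplicate names.
    """
    wanted = set(wanted)
    found = {}
    for chip_path, line_names in scan_chips():
        for offset, name in enumerate(line_names):
            if name in wanted and name not in found:
                found[name] = (chip_path, offset)
        if wanted <= found.keys():
            break  # everything resolved; no need to scan further chips
    return found

# Illustrative fake data in place of real gpiochip devices.
def fake_scan():
    yield "/dev/gpiochip0", ["reset", "led0"]
    yield "/dev/gpiochip1", ["sensor", "irq"]

found = find_lines_batch(["irq", "reset", "missing"], fake_scan)
```

The early break means that when all requested names resolve on the first
chip, the remaining chips are never opened at all - which is exactly the
win over a per-label find_line() loop.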

> > Generally an application should request the lines it requires once and hold
> > them for the duration.  Similarly functions such as find_line() should be
> > performed once per line.
>
> Of course it does that ;-)
> This board has a large amount of GPIO lines, and like I said, it is during the
> initial configuration phase of the application that I am seeing this problem.
>

Good to hear - from your earlier description I was concerned that
you might be doing it continuously.

Cheers,
Kent.
