Re: GPIOLIB locking is broken and how to fix it

On Sat, Dec 09, 2023 at 08:24:59PM +0100, Bartosz Golaszewski wrote:
> On Sat, Dec 9, 2023 at 2:56 AM Kent Gibson <warthog618@xxxxxxxxx> wrote:
> >
> > On Fri, Dec 08, 2023 at 07:54:40PM +0100, Bartosz Golaszewski wrote:
> > > On Fri, Dec 8, 2023 at 11:27 AM Kent Gibson <warthog618@xxxxxxxxx> wrote:
> > > >
> > > > On Fri, Dec 08, 2023 at 10:52:09AM +0100, Bartosz Golaszewski wrote:
> > > > > On Fri, Dec 8, 2023 at 9:38 AM Kent Gibson <warthog618@xxxxxxxxx> wrote:
> > > > > >
> > > > > > On Fri, Dec 08, 2023 at 09:13:17AM +0100, Bartosz Golaszewski wrote:
> > > > > > > On Fri, Dec 8, 2023 at 2:01 AM Kent Gibson <warthog618@xxxxxxxxx> wrote:
> > > > > > > >
> > > > > > > > On Thu, Dec 07, 2023 at 07:37:54PM +0100, Bartosz Golaszewski wrote:
> > > > > > > > > On Tue, Nov 28, 2023 at 11:47 AM Bartosz Golaszewski <brgl@xxxxxxxx> wrote:
> > > > > > > > > >
> > > > > > > > >
> > > > > > > > > [snip]
> > > > > > >
> > > > > >
> > > > > > Yeah, no need to risk other GPIO users messing with it if it is only there
> > > > > > for cdev.
> > > > > > Want me to take a look at it or are you happy to take care of it?
> > > > > >
> > > > >
> > > > > If you'll find the time to do it in the following days then sure, go
> > > > > ahead; otherwise, I'll have some spare cycles today and next week to
> > > > > spend on it.
> > > > >
> > > >
> > > > It would probably take me longer than that to context switch, so go for
> > > > it.
> > > >
> > >
> > > Well, I went for it and it turns out to be quite tricky. We have
> > > linereq and gpio_chardev_data, which have independent lifetimes, and the
> > > only resource they share is the gpio_device and - by extension - the gpio
> > > descriptors. If we want to store some additional data locally within
> > > the context of gpiolib-cdev.c then I see the following options:
> > >
> >
> > Well, that probably explains why putting it in the desc made so much
> > sense at the time.
> >
> > Let's take a look at the code...
> >
> > I had thought it could be moved to struct line (contained within
> > struct linereq), so basically replacing line->desc->debounce_period_us
> > with line->debounce_period_us.  That almost works, but the problem there
> > is that gpio_desc_to_lineinfo() returns the debounce period in the line
> > info - and there is no way to access the linereq/line from the desc...
> >
> > Ah, so the lineinfo_get/_v1() functions that call
> > gpio_desc_to_lineinfo() also have the gpio_chardev_data to work with -
> > now I see where you are coming from.
> > (Debounce is not relevant for v1, so that reduces the problem to
> > lineinfo_get().)
> >
> > > 1. Global radix_tree containing additional configuration
> > > (debounce_period_us for now) looked up by the descriptor's address.
> > > Lookup can be done locklessly using RCU and from atomic context
> > > (interrupt handlers in cdev code).
> > >
> >
> > The irq handlers are already protected from changes to debounce period.
> > It is either set before the irqs are enabled (in the request), or the
> > irq is disabled, debounce updated, and irq re-enabled (in set_config).
> >

Having had a closer look at the code: strictly speaking, one of the
corner cases (updating the period for sw debouncing) doesn't do the
disable/enable, but lives with the possibility of the debouncer using a
stale value for one debounce period (as that is much simpler).

But the end result is the same - the debounce period doesn't require
additional locking.

> > > 2. Reference counted wrapper around descriptors. It would look something like:
> > >
> > > struct gpio_cdev_desc {
> > >     struct kref ref;
> > >     struct gpio_desc *desc;
> > >     unsigned int debounce_period_us;
> > > };
> > >
> > > And then struct gpio_chardev_data would store an array of pointers to
> > > this wrapper while struct line would store a pointer to it instead of
> > > directly referencing struct gpio_desc.
> > >
> > > Any other ideas?
> > >
> >
> > I still think the primary location for any additional line config is in
> > struct line - that makes it clear and simple for the majority of cdev and
> > matches the lifetimes of the accessors (other than lineinfo_get()).
> >
> > The only issue being how to access that info from lineinfo_get().
> > I guess we can't stop reporting the debounce in the line info :-(.
> >
> > Rather than worry about a linkage between gpio_chardev_data and the
> > linereq/line, I would consider a separate copy of this supplemental line
> > info for the chardev, possibly using a radix_tree as you suggest.
> > That would be updated by the linereq to match its value in struct line.
> > So basically Option 1 but for a shadow copy of the additional info.
> >
>
> We still need to connect linereq with its "parent" gpio_chardev_data
> somehow and make this link weak so that it can survive one or the
> other being destroyed. Maybe a notifier in linereq to which
> gpio_chardev_data would subscribe? It would send out notifications on
> changes to debounce_period which gpio_chardev_data could store. When
> linereq goes out of scope it sends a corresponding notification
> allowing gpio_chardev_data to unsubscribe before linereq is freed,
> while when gpio_chardev_data goes out of scope first, it unsubscribes
> when being released.
>

No, there has to be a link between both and the supplemental info.
For gpio_chardev_data that is to create lineinfo, and for the linereq it
is to keep the value reported in lineinfo mirroring the current value.
Below I suggested making the supplemental info a reference counted
object, with chip scope, referenced by both gpio_chardev_data and the
linereq. So last one out turns off the lights.

Having the shadow copy allows most usage to avoid the tree lookup and any
associated locking (assuming the tree isn't inherently thread safe and
requires a spinlock to prevent modification during a lookup).
It is only populating the lineinfo or updating the value that would
require the lookup, and neither are a hot path (if there is such a thing
in cdev).

Hmmm, the radix_tree allocates a page of entries at a time, which might
be a bit of overkill per chip, so maybe a global is the way to go?
Or something other than a radix_tree, say an rbtree?

All this is getting rather complicated just to relocate one field, so I'm
starting to reconsider whether the desc was the right place for it after
all. ¯\_(ツ)_/¯

OTOH, I've partially coded my suggestion, to the point of skeletons for
the supplemental info, so if you like I'm happy to finish that off and
provide patches.  Though what remains is probably 90% of the work...

Cheers,
Kent.

> Bart
>
> > I'm not a fan of globals, so would go per chip and indexed by line
> > offset rather than desc. But then, given the lifetime issues at play, that
> > would have to be in a reference counted object that both gpio_chardev_data
> > and linereq would reference.  So swings and roundabouts.
> >
> > Does that make sense? Anyway, that's my $.50.
> >
> > Cheers,
> > Kent.
> >



