On Fri, Dec 08, 2023 at 07:54:40PM +0100, Bartosz Golaszewski wrote:
> On Fri, Dec 8, 2023 at 11:27 AM Kent Gibson <warthog618@xxxxxxxxx> wrote:
> >
> > On Fri, Dec 08, 2023 at 10:52:09AM +0100, Bartosz Golaszewski wrote:
> > > On Fri, Dec 8, 2023 at 9:38 AM Kent Gibson <warthog618@xxxxxxxxx> wrote:
> > > >
> > > > On Fri, Dec 08, 2023 at 09:13:17AM +0100, Bartosz Golaszewski wrote:
> > > > > On Fri, Dec 8, 2023 at 2:01 AM Kent Gibson <warthog618@xxxxxxxxx> wrote:
> > > > > >
> > > > > > On Thu, Dec 07, 2023 at 07:37:54PM +0100, Bartosz Golaszewski wrote:
> > > > > > > On Tue, Nov 28, 2023 at 11:47 AM Bartosz Golaszewski <brgl@xxxxxxxx> wrote:
> > > > > > > >
>
> [snip]
>
> > > > Yeah, no need to risk other GPIO users messing with it if it is only there
> > > > for cdev.
> > > > Want me to take a look at it or are you happy to take care of it?
> > > >
> > >
> > > If you'll find the time to do it in the following days then sure, go
> > > ahead, otherwise, I'll have some spare cycles today and next week to
> > > spend on it.
> > >
> >
> > It would probably take me longer than that to context switch, so go for
> > it.
> >
>
> Well I went for it and it turns out to be quite tricky. We have
> linereq and gpio_chardev_data that have independent lifetimes and the
> only resource they share is the gpio_device and - by extension - gpio
> descriptors. If we want to store some additional data locally within
> the context of gpiolib-cdev.c then I see the following options:
>

Well that probably explains why putting it in the desc made so much
sense at the time. Let's take a look at the code...

I had thought it could be moved to struct line (contained within
struct linereq), so basically replacing line->desc->debounce_period_us
with line->debounce_period_us. That almost works, but the problem
there is that gpio_desc_to_lineinfo() returns the debounce period in
the line info - and there is no way to access the linereq/line from
the desc...

Ah, so the lineinfo_get/_v1() functions that call
gpio_desc_to_lineinfo() also have the gpio_chardev_data to work with -
now I see where you are coming from. (Debounce is not relevant for v1,
so that reduces the problem to lineinfo_get().)

> 1. Global radix_tree containing additional configuration
> (debounce_period_us for now) looked up by the descriptor's address.
> Lookup can be done locklessly using RCU and from atomic context
> (interrupt handlers in cdev code).
>

The irq handlers are already protected from changes to the debounce
period. It is either set before the irqs are enabled (in the request),
or the irq is disabled, debounce updated, and irq re-enabled (in
set_config).

> 2. Reference counted wrapper around descriptors. It would look something like:
>
> struct gpio_cdev_desc {
>         struct kref ref;
>         struct gpio_desc *desc;
>         unsigned int debounce_period_us;
> };
>
> And then struct gpio_chardev_data would store an array of pointers to
> this wrapper while struct line would store a pointer to it instead of
> directly referencing struct gpio_desc.
>
> Any other ideas?
>

I still think the primary location for any additional line config is
in struct line - that makes it clear and simple for the majority of
cdev and matches the lifetimes of the accessors (other than
lineinfo_get()). The only issue being how to access that info from
lineinfo_get().

I guess we can't stop reporting the debounce in the line info :-(.
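To be concrete about what I mean by keeping it in struct line, here is
a rough sketch (it assumes the existing gpiolib-cdev.c context, the
struct is heavily trimmed, and the helpers are made up, so treat the
names as illustrative only):

struct line {
        struct gpio_desc *desc;
        /*
         * debounce period would live here rather than in struct
         * gpio_desc, as only cdev uses it and its lifetime matches
         * the request
         */
        unsigned int debounce_period_us;
        /* irq, edge detection state, debounce work, etc. elided */
};

/* the request/set_config/irq paths work on the line, not the desc */
static inline unsigned int line_debounce_period_us(struct line *line)
{
        return READ_ONCE(line->debounce_period_us);
}

static inline void line_set_debounce_period_us(struct line *line,
                                               unsigned int period_us)
{
        WRITE_ONCE(line->debounce_period_us, period_us);
}

That keeps the hot paths trivial - the only awkward one is
lineinfo_get(), which has the desc and the gpio_chardev_data but no
way back to the line.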
Rather than worry about a linkage between gpio_chardev_data and the
linereq/line, I would consider a separate copy of this supplemental
line info for the chardev, possibly using a radix_tree as you suggest.
That would be updated by the linereq to match its value in struct
line. So basically Option 1, but for a shadow copy of the additional
info.

I'm not a fan of globals, so would go per chip and indexed by line
offset rather than desc. But then, given the lifetime issues at play,
that would have to be in a reference counted object that both
gpio_chardev_data and linereq would reference. So swings and
roundabouts.

Does that make sense?

Anyway, that's my $0.02.

Cheers,
Kent.
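P.S. A very rough sketch of the shadow copy idea, in case it helps -
every name below is made up, the locking is only indicative, and it
assumes the usual gpiolib-cdev.c includes plus <linux/radix-tree.h>,
<linux/spinlock.h> and <linux/kref.h>:

/* cdev-only supplemental info for one line */
struct cdev_line_supinfo {
        unsigned int debounce_period_us;
};

/* per-chip table, referenced by both gpio_chardev_data and linereqs */
struct cdev_chip_supinfo {
        struct kref ref;
        spinlock_t lock;
        struct radix_tree_root lines;   /* offset -> cdev_line_supinfo */
};

/*
 * The linereq updates its entry whenever it changes
 * line->debounce_period_us (entry allocation at request time elided).
 */
static void cdev_supinfo_update(struct cdev_chip_supinfo *sup,
                                unsigned int offset,
                                unsigned int debounce_period_us)
{
        struct cdev_line_supinfo *info;

        spin_lock(&sup->lock);
        info = radix_tree_lookup(&sup->lines, offset);
        if (info)
                info->debounce_period_us = debounce_period_us;
        spin_unlock(&sup->lock);
}

lineinfo_get() would then do the corresponding lookup by offset
instead of poking at the desc, and the last of the
gpio_chardev_data/linereq holders to drop its kref would free the
table.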