Hi Tony & Grygorii,

On Wed, Mar 22, 2017 at 10:17:14AM -0700, Tony Lindgren wrote:
> * Ladislav Michl <ladis@xxxxxxxxxxxxxx> [170320 17:19]:
> > On Mon, Mar 20, 2017 at 04:16:33PM -0500, Grygorii Strashko wrote:
> > > On 03/20/2017 06:21 AM, Ladislav Michl wrote:
> > > > On Thu, Mar 16, 2017 at 12:17:45PM -0700, Tony Lindgren wrote:
> > > >> Hmm maybe we need to flush posted writes when re-enabling the GPIO interrupts?
> > > >>
> > > >> Below is an untested patch that might help if that's the case.
> > > >
> > > > Unfortunately that's not the case. Even writing a set & clear version of
> > > > the interrupt demux handler did not make it any better. And the idea from
> > > > the ancient patch I initially sent is not easily extensible when we
> > > > need to trigger an interrupt on both edges. So far I'm clueless...
> > >
> > > So, just to be sure - can you reproduce it with LKML?
> >
> > Do you mean the mainline kernel? I'm on 4.11-rc3 now...
>
> > > As per the code, the possible problem could (theoretically) be
> > > a double acking of edge irqs:
> > > - omap_gpio_irq_handler
> > >   - "isr" = read irq status
> > >   - omap_clear_gpio_irqbank(bank, isr_saved & ~level_mask);
> > >     --- clear edge status, so a new irq can be accepted
> > >   - loop while "isr"
> > >     - generic_handle_irq()
> > >       - handle_edge_irq()
> > >         - desc->irq_data.chip->irq_ack(&desc->irq_data);
> > >           - omap_gpio_ack_irq()
> > > It might be that at this moment the edge IRQ was triggered again,
> > > and it will be cleared.
> > >
> > > Just as an experiment, could you try to update omap_gpio_ack_irq()
> > > as below:
> > >
> > >     if (irq type is not edge)
> > >         omap_clear_gpio_irqstatus(bank, offset);
> >
> > Rewritten as:
> >
> > diff --git a/drivers/gpio/gpio-omap.c b/drivers/gpio/gpio-omap.c
> > index efc85a279d54..9381763e1fec 100644
> > --- a/drivers/gpio/gpio-omap.c
> > +++ b/drivers/gpio/gpio-omap.c
> > @@ -806,7 +806,8 @@ static void omap_gpio_ack_irq(struct irq_data *d)
> >  	struct gpio_bank *bank = omap_irq_data_get_bank(d);
> >  	unsigned offset = d->hwirq;
> >  
> > -	omap_clear_gpio_irqstatus(bank, offset);
> > +	if (bank->level_mask & BIT(offset))
> > +		omap_clear_gpio_irqstatus(bank, offset);
> >  }
> >  
> >  static void omap_gpio_mask_irq(struct irq_data *d)
> >
> > And that did the trick. So far I have tried an IR decoder, and decoding
> > sometimes fails, but I cannot say anything certain until I check with
> > a scope (which I'll do tomorrow).
>
> Another good code review catch by Grygorii! :)

So, to add more info to my previous emails... For testing the above patch
I set up a tftp server and NFS root, and that did the trick. As both the
(tested) edge-triggered GPIO and the ethernet interrupt line are connected
to the same GPIO bank, it seemed like an improvement. So I switched to an
initramfs for the tests. See below...

> Can you guys please add a comment there to the code when posting the
> proper patch? Something like:
>
> /*
>  * Edge GPIOs are already cleared during handling, clearing
>  * them here again will cause lost interrupts.
>  */
>
> And also add a proper Fixes tag so this gets propagated to the
> stable kernels. It really seems we've had this for a long time.

The above probably still applies, but it is a bit hard to trigger. I'll
set up a test for that and provide reports.

Edge-triggered interrupts are lost no matter how fast their source is;
it happens even at 10 Hz, and the patch below fixes the issue. Of course,
it is just a hack showing where the problem is, and it will need a bit
more debugging.
Thank you,
	ladis

diff --git a/drivers/gpio/gpio-omap.c b/drivers/gpio/gpio-omap.c
index efc85a279d54..f6398ab7b75a 100644
--- a/drivers/gpio/gpio-omap.c
+++ b/drivers/gpio/gpio-omap.c
@@ -1273,6 +1273,7 @@ static int omap_gpio_runtime_suspend(struct device *dev)
 	unsigned long flags;
 	u32 wake_low, wake_hi;
 
+	return -EINVAL;
 	raw_spin_lock_irqsave(&bank->lock, flags);
 
 	/*
@@ -1341,6 +1342,7 @@ static int omap_gpio_runtime_resume(struct device *dev)
 	unsigned long flags;
 	int c;
 
+	return -EINVAL;
 	raw_spin_lock_irqsave(&bank->lock, flags);
 
 	/*