Hi,

Just came across this patch and have a query.

On 03/16/2012 04:05 PM, Tarun Kanti DebBarma wrote:
> There is no more need to have saved_wakeup because bank->context.wake_en
> already holds that value. So getting rid of read/write operation associated
> with this field.
>
> Signed-off-by: Tarun Kanti DebBarma <tarun.kanti@xxxxxx>
> Reviewed-by: Santosh Shilimkar <santosh.shilimkar@xxxxxx>
> Acked-by: Felipe Balbi <balbi@xxxxxx>
> ---
>  drivers/gpio/gpio-omap.c |   12 +++---------
>  1 files changed, 3 insertions(+), 9 deletions(-)
>
> diff --git a/drivers/gpio/gpio-omap.c b/drivers/gpio/gpio-omap.c
> index 3a4f151..3b91ade 100644
> --- a/drivers/gpio/gpio-omap.c
> +++ b/drivers/gpio/gpio-omap.c
> @@ -57,7 +57,6 @@ struct gpio_bank {
>  	u16 irq;
>  	int irq_base;
>  	struct irq_domain *domain;
> -	u32 saved_wakeup;
>  	u32 non_wakeup_gpios;
>  	u32 enabled_non_wakeup_gpios;
>  	struct gpio_regs context;
> @@ -777,7 +776,6 @@ static int omap_mpuio_suspend_noirq(struct device *dev)
>  	unsigned long flags;
>
>  	spin_lock_irqsave(&bank->lock, flags);
> -	bank->saved_wakeup = __raw_readl(mask_reg);
>  	__raw_writel(0xffff & ~bank->context.wake_en, mask_reg);

OK, here you are overwriting mask_reg with the wakeup bitmask without saving
mask_reg's original content.

>  	spin_unlock_irqrestore(&bank->lock, flags);
>
> @@ -793,7 +791,7 @@ static int omap_mpuio_resume_noirq(struct device *dev)
>  	unsigned long flags;
>
>  	spin_lock_irqsave(&bank->lock, flags);
> -	__raw_writel(bank->saved_wakeup, mask_reg);
> +	__raw_writel(bank->context.wake_en, mask_reg);

Now you are restoring nothing but the same content (context.wake_en) that you
used during suspend, instead of mask_reg's pre-suspend content. This will
cause the non-wakeup GPIO interrupts to remain masked after a suspend/resume
cycle. So isn't this a bug?

The proper solution would be to save the mask_reg contents into a field other
than context.wake_en during suspend, and restore that on resume.

>  	spin_unlock_irqrestore(&bank->lock, flags);
>
>  	return 0;

cheers,
-roger
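
A minimal sketch of the suggested fix, assuming a hypothetical u32 saved_mask
field added to struct gpio_bank (the field name is illustrative only; the rest
uses the identifiers already visible in the patch context above):

	/* omap_mpuio_suspend_noirq(): preserve the current mask before overwriting it */
	spin_lock_irqsave(&bank->lock, flags);
	bank->saved_mask = __raw_readl(mask_reg);                 /* save pre-suspend mask */
	__raw_writel(0xffff & ~bank->context.wake_en, mask_reg);  /* leave only wakeup sources */
	spin_unlock_irqrestore(&bank->lock, flags);

	/* omap_mpuio_resume_noirq(): put back what was there before suspend */
	spin_lock_irqsave(&bank->lock, flags);
	__raw_writel(bank->saved_mask, mask_reg);                 /* restore pre-suspend mask */
	spin_unlock_irqrestore(&bank->lock, flags);

This keeps context.wake_en purely as the wakeup-enable state, while the
pre-suspend mask register value is saved and restored independently.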