RE: [Linuxarm] Re: [PATCH for next v1 0/2] gpio: few clean up patches to replace spin_lock_irqsave with spin_lock

> -----Original Message-----
> From: Andy Shevchenko [mailto:andy.shevchenko@xxxxxxxxx]
> Sent: Thursday, February 11, 2021 3:57 AM
> To: Song Bao Hua (Barry Song) <song.bao.hua@xxxxxxxxxxxxx>
> Cc: luojiaxing <luojiaxing@xxxxxxxxxx>; Linus Walleij
> <linus.walleij@xxxxxxxxxx>; Grygorii Strashko <grygorii.strashko@xxxxxx>;
> Santosh Shilimkar <ssantosh@xxxxxxxxxx>; Kevin Hilman <khilman@xxxxxxxxxx>;
> open list:GPIO SUBSYSTEM <linux-gpio@xxxxxxxxxxxxxxx>; Linux Kernel Mailing
> List <linux-kernel@xxxxxxxxxxxxxxx>; linuxarm@xxxxxxxxxxxxx
> Subject: Re: [Linuxarm] Re: [PATCH for next v1 0/2] gpio: few clean up patches
> to replace spin_lock_irqsave with spin_lock
> 
> On Wed, Feb 10, 2021 at 11:50:45AM +0000, Song Bao Hua (Barry Song) wrote:
> > > -----Original Message-----
> > > From: Andy Shevchenko [mailto:andy.shevchenko@xxxxxxxxx]
> > > Sent: Wednesday, February 10, 2021 11:51 PM
> > > On Wed, Feb 10, 2021 at 5:43 AM luojiaxing <luojiaxing@xxxxxxxxxx> wrote:
> > > > On 2021/2/9 17:42, Andy Shevchenko wrote:
> 
> ...
> 
> > > > Between IRQ handler A and IRQ handler A, there's no need for a
> > > > spin_lock_irqsave().
> > >
> > > Right, but it's not the case in the patches you provided.
> >
> > The code still holds the spin_lock. So if two CPUs call the same IRQ
> > handler, the spin_lock makes them spin; and if interrupts are threaded,
> > the spin_lock makes the two threads run the same handler one by one.
> 
> If you run on an SMP system and it happens that spin_lock_irqsave() runs
> immediately after spin_unlock(), you will get into trouble. Am I mistaken?

Hi Andy,
Thanks for your reply.

But I don't agree that spin_lock_irqsave() running immediately after
spin_unlock() could be a problem on SMP.
When the 1st CPU releases the spinlock via spin_unlock(), it has already
completed its section of accessing the critical data; only then does the 2nd
CPU acquire the spin_lock. The two CPUs' accesses to the same data never
overlap.

> 
> I think this entire activity is a carefully crafted minefield for the future
> syzkaller and fuzzers alike. I don't believe there are no side effects in the
> long term on all possible systems and configurations (including forced
> threaded IRQ handlers).

Also, I don't understand why forced threaded IRQs could be a problem. Once an
IRQ handler runs as a thread, the situation actually becomes simpler than with
a non-threaded IRQ: all threads, including those IRQ threads, must hold the
spin_lock, so they cannot access the same critical data at the same time
either.

> 
> I would love to see a better explanation in the commit message of such patches
> which makes it clear that there are *no* side effects.
> 

People had the same questions before, but I guess the discussion around this
commit led to a better commit log:

https://git.kernel.org/pub/scm/linux/kernel/git/torvalds/linux.git/commit/?id=4eb7d0cd59

> For time being, NAK to the all patches of this kind.

Fair enough. If you expect a better explanation, I agree the commit log is too
short.

> 
> --
> With Best Regards,
> Andy Shevchenko
> 

Thanks
Barry



