Hi Greg,

On 17.08.20 14:41, gregkh@xxxxxxxxxxxxxxxxxxx wrote:
> The patch below was submitted to be applied to the 5.8-stable tree.
>
> I fail to see how this patch meets the stable kernel rules as found at
> Documentation/process/stable-kernel-rules.rst.
>
> I could be totally wrong, and if so, please respond to
> <stable@xxxxxxxxxxxxxxx> and let me know why this patch should be
> applied. Otherwise, it is now dropped from my patch queues, never to be
> seen again.

I tagged it for the stable series because, when follow-up patches come in,
I sometimes need to trigger a backport of such small prior cleanup patches.
But you are right, for now it's ok to drop it.

Thank you!
Helge

> thanks,
>
> greg k-h
>
> ------------------ original commit in Linus's tree ------------------
>
> From 3bc6e3dc5a54d5842938c6f1ed78dd1add379af7 Mon Sep 17 00:00:00 2001
> From: Helge Deller <deller@xxxxxx>
> Date: Sun, 14 Jun 2020 10:50:42 +0200
> Subject: [PATCH] parisc: Whitespace cleanups in atomic.h
>
> Fix whitespace indenting and drop trailing backslashes.
>
> Cc: <stable@xxxxxxxxxxxxxxx> # 4.19+
> Signed-off-by: Helge Deller <deller@xxxxxx>
>
> diff --git a/arch/parisc/include/asm/atomic.h b/arch/parisc/include/asm/atomic.h
> index 6dd4171c9530..90e8267fc509 100644
> --- a/arch/parisc/include/asm/atomic.h
> +++ b/arch/parisc/include/asm/atomic.h
> @@ -34,13 +34,13 @@ extern arch_spinlock_t __atomic_hash[ATOMIC_HASH_SIZE] __lock_aligned;
>  /* Can't use raw_spin_lock_irq because of #include problems, so
>   * this is the substitute */
>  #define _atomic_spin_lock_irqsave(l,f) do {	\
> -	arch_spinlock_t *s = ATOMIC_HASH(l);		\
> +	arch_spinlock_t *s = ATOMIC_HASH(l);	\
>  	local_irq_save(f);			\
>  	arch_spin_lock(s);			\
>  } while(0)
>
>  #define _atomic_spin_unlock_irqrestore(l,f) do {	\
> -	arch_spinlock_t *s = ATOMIC_HASH(l);		\
> +	arch_spinlock_t *s = ATOMIC_HASH(l);	\
>  	arch_spin_unlock(s);			\
>  	local_irq_restore(f);			\
>  } while(0)
>
> @@ -85,7 +85,7 @@ static __inline__ void atomic_##op(int i, atomic_t *v)	\
>  	_atomic_spin_lock_irqsave(v, flags);		\
>  	v->counter c_op i;				\
>  	_atomic_spin_unlock_irqrestore(v, flags);	\
> -}						\
> +}
>
>  #define ATOMIC_OP_RETURN(op, c_op)			\
>  static __inline__ int atomic_##op##_return(int i, atomic_t *v)	\
>
> @@ -150,7 +150,7 @@ static __inline__ void atomic64_##op(s64 i, atomic64_t *v)	\
>  	_atomic_spin_lock_irqsave(v, flags);		\
>  	v->counter c_op i;				\
>  	_atomic_spin_unlock_irqrestore(v, flags);	\
> -}						\
> +}
>
>  #define ATOMIC64_OP_RETURN(op, c_op)			\
>  static __inline__ s64 atomic64_##op##_return(s64 i, atomic64_t *v)	\
>