Hello,

On Mon, Feb 11, 2019 at 01:37:08PM +0100, Jonas Gorski wrote:
> On Mon, 11 Feb 2019 at 05:39, Michael Clark <michaeljclark@xxxxxxx> wrote:
> > diff --git a/arch/mips/kernel/cmpxchg.c b/arch/mips/kernel/cmpxchg.c
> > index 0b9535bc2c53..1169958fd748 100644
> > --- a/arch/mips/kernel/cmpxchg.c
> > +++ b/arch/mips/kernel/cmpxchg.c
> > @@ -57,7 +57,7 @@ unsigned long __cmpxchg_small(volatile void *ptr, unsigned long old,
> >         u32 mask, old32, new32, load32;
> >         volatile u32 *ptr32;
> >         unsigned int shift;
> > -       u8 load;
> > +       u32 load;
>
> There already is a u32 line above, so maybe move it there.
>
> Also reading the code to understand this, isn't the old code broken
> for cmpxchg_small calls for 16-bit variables, where old is > 0xff?
>
> because it does later
>
>         /*
>          * Ensure the byte we want to exchange matches the expected
>          * old value, and if not then bail.
>          */
>         load = (load32 & mask) >> shift;
>         if (load != old)
>                 return load;
>
> and if load is a u8, it can never be old if old contains a larger
> value than what can fit in a u8.
>
> After re-reading your commit log, it seems you say something similar,
> but it wasn't quite obvious for me that this means the function is
> basically broken for short values where the old value does not fit in
> u8.
>
> So this should have an appropriate "Fixes:" tag. And Cc: stable. Seems
> like quite a serious issue to me.

It could be serious if used, though in practice this support was added
to support qrwlock, which only really needed 8-bit support at the time.
Since then commit d13316614633 ("locking/qrwlock: Prevent slowpath
writers getting held up by fastpath") removed even that.

But still, yes, it's clearly a nasty little bug if anyone does try to
use a 16-bit cmpxchg(), and I've marked it for stable backport whilst
applying.

Thanks for fixing this Michael :)

Paul