Re: [PATCH 3/3] MIPS: fix truncation in __cmpxchg_small for short values

Hi Paul,

On 12/02/19 9:20 AM, Paul Burton wrote:
Hello,

On Mon, Feb 11, 2019 at 01:37:08PM +0100, Jonas Gorski wrote:
On Mon, 11 Feb 2019 at 05:39, Michael Clark <michaeljclark@xxxxxxx> wrote:
diff --git a/arch/mips/kernel/cmpxchg.c b/arch/mips/kernel/cmpxchg.c
index 0b9535bc2c53..1169958fd748 100644
--- a/arch/mips/kernel/cmpxchg.c
+++ b/arch/mips/kernel/cmpxchg.c
@@ -57,7 +57,7 @@ unsigned long __cmpxchg_small(volatile void *ptr, unsigned long old,
         u32 mask, old32, new32, load32;
         volatile u32 *ptr32;
         unsigned int shift;
-       u8 load;
+       u32 load;

There already is a u32 line above, so maybe move it there.

Also reading the code to understand this, isn't the old code broken
for cmpxchg_small calls for 16-bit variables, where old is > 0xff?

because it does later

         /*
          * Ensure the byte we want to exchange matches the expected
          * old value, and if not then bail.
          */
         load = (load32 & mask) >> shift;
         if (load != old)
             return load;

and if load is a u8, it can never be old if old contains a larger
value than what can fit in a u8.
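
To make the failure mode concrete, here is a minimal user-space sketch of the same masked compare (the values and variable names are made up for illustration; this is not the kernel code itself):

#include <stdint.h>
#include <stdio.h>

int main(void)
{
    uint32_t load32 = 0x00001234;  /* word containing the 16-bit value  */
    uint32_t mask   = 0x0000ffff;  /* halfword mask, shift of zero      */
    unsigned int shift = 0;
    unsigned long old  = 0x1234;   /* expected old value, > 0xff        */

    uint8_t  load_u8  = (load32 & mask) >> shift;  /* truncates to 0x34 */
    uint32_t load_u32 = (load32 & mask) >> shift;  /* keeps 0x1234      */

    /* with a u8 temporary the compare can never succeed for old > 0xff */
    printf("u8:  %s\n", load_u8  == old ? "match" : "mismatch");
    printf("u32: %s\n", load_u32 == old ? "match" : "mismatch");
    return 0;
}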

After re-reading your commit log, it seems you say something similar,
but it wasn't quite obvious to me that this means the function is
basically broken for short values where the old value does not fit in
a u8.

So this should have an appropriate "Fixes:" tag. And Cc: stable. Seems
like quite a serious issue to me.

Okay. I was pretty sure it was a bug, but I was not sure about the conventions for Linux fixes.

It could be serious if used, though in practice this support was added
to support qrwlock which only really needed 8-bit support at the time.
Since then commit d13316614633 ("locking/qrwlock: Prevent slowpath
writers getting held up by fastpath") removed even that.

Yes. I suspected it was a latent bug. Truncating shorts in compare-and-swap would only show up with unusual values.

But still, yes it's clearly a nasty little bug if anyone does try to use
a 16-bit cmpxchg() & I've marked it for stable backport whilst applying.

Thanks for fixing this Michael :)

Appreciated. Yes, thanks for taking the time to verify it. Although it is quite an obvious fix, it could be a big time sink if one encountered it in the wild, as one wouldn't expect a broken intrinsic.

Sorry for the lag in replying. At the time it probably looked like I was throwing text plus code over the fence, as part of discovery for a submission regarding RISC-V atomics. I had every intention of circling back; I am just not good with many small emails.

This has prompted me to revise a fair spinlock implementation that fits into 32 bits.


.. RISC-V tangent...

Related to the parent thread: I was looking into ticket spinlocks for bare-metal threads, which is what prompted this patch in the first place.

The Linux asm-generic ticket spinlock is fair, but LR/SC for sub-word atomics requires significantly more instructions, so there is a cycle penalty for fairness versus amoadd. The problem on RISC-V is that a fair spinlock using amoadd for both head and tail would need to be 64 bits in size, due to the 32-bit minimum width for atomic operations. For a per-CPU structure on a large system, that is significant memory.
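
As a rough sketch of the size argument (the type and field names below are made up, not kernel types): with amoadd.w needing a full word per counter, a fair head/tail lock costs 8 bytes, whereas a packed variant fits in 4 bytes at the price of needing LR/SC (or a whole-word add) to update one half:

#include <stdint.h>

/* each counter a full word so amoadd.w can target it directly: 8 bytes */
typedef struct {
    uint32_t head;   /* next ticket to hand out       */
    uint32_t tail;   /* ticket currently being served */
} ticket_lock_amo;

/* packed variant: head in bits 31:16, tail in bits 15:0 -- 4 bytes, but
 * updating one half needs LR/SC or a whole-word add of 0x0001_0000 */
typedef struct {
    uint32_t head_tail;
} ticket_lock_packed;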

A compromise between code size and compactness of the structure would be an amoadd.w of 0x0001_0000 for head acquisition and LR/SC for tail release. I chose 2^16 ticket values because 255 cores seems a bit small these days, given that folk are fitting more than a thousand RISC-V cores on an FPGA, and one assumes 4096 is quite plausible. Anyhow, here is a 32-bit ticket spinlock with support for 65,535 cores (needs verification):

spin_lock:
    lui      a2,0x10           # a2 = 0x10000, increment for head (bits 31:16)
    amoadd.w a5,a2,(a0)        # atomically bump head; a5 = old lock word
    srliw    a4,a5,0x10        # a4 = our ticket (old head)
2:  slliw    a5,a5,0x10        # isolate tail (bits 15:0)
    srliw    a5,a5,0x10
    bne      a5,a4,3f          # not our turn yet: spin
    ret
3:  lw       a5,0(a0)          # reload the lock word
    fence    r,rw              # order the reload before later accesses
    j        2b

spin_trylock:
    lui      a5,0x10           # a5 = 0x10000, head increment
    lr.w.aq  a4,(a0)           # a4 = current lock word (reserved)
    addw     a5,a5,a4          # a5 = lock word with head bumped
    slliw    a3,a5,0x10        # a3 = tail (bits 15:0)
    srliw    a3,a3,0x10
    srliw    a4,a4,0x10        # a4 = current head
    beq      a3,a4,2f          # head == tail: lock is free, try to take it
1:  li       a0,0              # contended or SC failed: return 0
    ret
2:  sc.w.rl  a4,a5,(a0)        # publish the bumped head
    bnez     a4,1b             # SC failed: report failure
    fence    r,rw
    li       a0,1              # success: return 1
    ret

spin_unlock:
    fence    rw,w              # release ordering before handing over the lock
1:  lr.w.aq  a4,(a0)           # a4 = current lock word
    srliw    a5,a4,0x10        # a5 = head (bits 31:16)
    addiw    a4,a4,1           # bump the tail
    slliw    a4,a4,0x10        # mask tail back to 16 bits
    srliw    a4,a4,0x10
    slliw    a5,a5,0x10        # restore head into bits 31:16
    or       a5,a5,a4          # a5 = new lock word
    sc.w.rl  a4,a5,(a0)        # try to store it
    bnez     a4,1b             # SC failed: retry
    ret
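
For readability, here is a hypothetical C rendering of the same packed-word algorithm using GCC __atomic builtins (the ticket_lock_t name and helpers are illustrative, and it uses compare-and-swap loops where the asm above uses LR/SC; a sketch, not a drop-in implementation):

#include <stdint.h>
#include <stdbool.h>

typedef struct { uint32_t lock; } ticket_lock_t;   /* head 31:16, tail 15:0 */

#define TICKET_SHIFT 16
#define TICKET_MASK  0xffffu

static inline void ticket_lock(ticket_lock_t *l)
{
    /* take a ticket: atomically bump the head (upper half) */
    uint32_t old = __atomic_fetch_add(&l->lock, 1u << TICKET_SHIFT,
                                      __ATOMIC_ACQUIRE);
    uint32_t ticket = old >> TICKET_SHIFT;

    /* spin until the tail (lower half) reaches our ticket */
    while ((__atomic_load_n(&l->lock, __ATOMIC_ACQUIRE) & TICKET_MASK) != ticket)
        ;
}

static inline bool ticket_trylock(ticket_lock_t *l)
{
    uint32_t old = __atomic_load_n(&l->lock, __ATOMIC_RELAXED);

    /* the lock is free only when head == tail */
    if ((old >> TICKET_SHIFT) != (old & TICKET_MASK))
        return false;

    /* claim it by bumping the head; give up if someone beat us to it */
    return __atomic_compare_exchange_n(&l->lock, &old,
                                       old + (1u << TICKET_SHIFT),
                                       false, __ATOMIC_ACQUIRE, __ATOMIC_RELAXED);
}

static inline void ticket_unlock(ticket_lock_t *l)
{
    uint32_t old = __atomic_load_n(&l->lock, __ATOMIC_RELAXED);
    uint32_t new;

    /* bump only the tail (lower half), preserving the head */
    do {
        new = (old & ~TICKET_MASK) | ((old + 1) & TICKET_MASK);
    } while (!__atomic_compare_exchange_n(&l->lock, &old, new, false,
                                          __ATOMIC_RELEASE, __ATOMIC_RELAXED));
}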

We could keep the current simple (but unfair) spinlock. I do suspect unfairness will not scale, so whatever is done may end up needing to be a config option. I am actually fond of the idea of a RISC-V Atomic Extension V2 in RISC-V 3.0 or 4.0 with 48-bit instructions; a 6-byte instruction would be quite a nice compromise.

It seems that the hybrid approach above, using amoadd.w for the head (i.e. fast ticket acquisition followed by a spin), is logical. It balances the code size penalty for lock/unlock when trying to fit a scalable ticket spinlock into 32 bits. If one swaps head and tail, then lock acquisition has a high cost and lock release becomes trivial, which seems the wrong way around. spin_trylock necessarily must use LR/SC, as it needs to conditionally acquire a ticket.

This is actually RISC-V generic, so I should probably post it somewhere where folk are interested in RISC-V software. I think if we come up with a simple lock it should be usable in BSD or Linux GPL, so consider any of these fragments public domain. Verification is assumed to be applied. The previous patches were tested in QEMU, but this asm is part compiler-emitted and part hand-coded (the compiler is not yet smart enough to avoid illegal branches inside LR/SC sequences, as it doesn't parse LR/SC; that is possibly an issue on Arm and other RISCs too, so I am just sharing thoughts...).

Michael.

