Hi Paul,

Thx for the explanation, here is my comment.

On Wed, Mar 31, 2021 at 1:33 PM Paul Campbell <taniwha@xxxxxxxxx> wrote:
>
> On Wednesday, 31 March 2021 5:18:56 PM NZDT Guo Ren wrote:
> > > > [1] https://github.com/c-sky/csky-linux/commit/e837aad23148542771794d8a2fcc52afd0fcbf88
> > > >
> > > > > It also seems that the current "amoswap" based implementation
> > > > > would be reliable independent of RsrvEventual/RsrvNonEventual.
> > > >
> > > > Yes, the hardware implementation of AMO could be different from LR/SC.
> > > > AMO could use ACE snoop holding to lock the bus in a hw coherency
> > > > design, but LR/SC uses an exclusive monitor without locking the bus.
> > > >
> > > > RISC-V has no CAS instructions, and it uses LR/SC for cmpxchg. I don't
> > > > think LR/SC would be slower than CAS, and CAS is just good for code
> > > > size.
> > >
> > > What I meant here is that the current spinlock uses a simple amoswap,
> > > which presumably does not suffer from the lack of forward progress you
> > > described.
> >
> > Does that mean we should prevent using LR/SC (if RsrvNonEventual)?
>
> Let me provide another data-point: I'm working on a high-end, highly
> speculative implementation with many concurrent instructions in flight -
> from my point of view both sorts of AMO (LR/SC and swap/add/etc) require
> me to grab a cache line in an exclusive modifiable state (so no
> difference there).
>
> More importantly, both sorts of AMO instructions (unlike most loads and
> stores) can't be speculated (not even LR, because it changes hidden
> state; I found this out the hard way bringing up the kernel).
>
> This means that both LR AND SC individually can't be executed until all
> speculation is resolved (so they happen really late in the execute path
> and block the resolution of the speculation of subsequent instructions) -
> equally, a single amoswap/add/etc instruction can't happen until late in
> the execute path - so both require the same cache line state, but one of
> these sorts of events is better than two of them.
>
> Which in short means that amoswap/add/etc is better for small things -
> small busy lock loops - while LR/SC is better for more complex things
> with actual processing between the LR and SC.

It seems your machine implements LR/SC and AMO the same way, but some
machines implement them differently. For AMO, I think it would work the
way you've described:
 - AMO is split into three parts: load & lock, ALU operation, store & unlock
 - load & lock: e.g. hold the bus with the ACE protocol's SNOOP channel
 - ALU operation: perform the atomic operation
 - store & unlock: write the result back and release the ACE SNOOP channel

I think the above is what you describe as "grab a cache line in an
exclusive modifiable state".

But LR/SC is different, because it splits the AMO into three real
instruction steps:
 - LR
 - operation instructions
 - SC

If we let LR hold the ACE SNOOP channel and let SC release it, that would
break the ISA design (we can't let one instruction hold the snoop bus and
make the other harts hang). So LR/SC instead uses an address monitor per
hart to detect whether the target address has been written in between.
That means LR/SC implementations won't provide a forward-progress
guarantee: the SC may keep failing as long as other harts keep writing
the line. If you care about forward-progress guarantees, I think the ISA
should provide cmpxchg (e.g. CAS) instead of LR/SC; see the sketch below.
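For reference, here is a minimal sketch of the LR/SC-based cmpxchg I mean
(cmpxchg_lr_sc is a made-up name; the code is modeled loosely on the
pattern in arch/riscv/include/asm/cmpxchg.h, simplified with no .aq/.rl
ordering, and untested). The reservation armed by lr.w is the per-hart
address monitor, and sc.w only succeeds if that reservation still holds -
nothing bounds how often the retry branch is taken on a RsrvNonEventual
machine:

    #include <stdint.h>

    /* Returns the old value; stores new only if *ptr == old. */
    static inline uint32_t cmpxchg_lr_sc(uint32_t *ptr, uint32_t old,
                                         uint32_t new)
    {
            uint32_t prev, rc;

            __asm__ __volatile__ (
            "0:     lr.w    %0, %2\n"     /* load, arm the address monitor */
            "       bne     %0, %3, 1f\n" /* compare failed: no store      */
            "       sc.w    %1, %4, %2\n" /* store iff monitor still armed */
            "       bnez    %1, 0b\n"     /* monitor broken: retry         */
            "1:\n"
            : "=&r" (prev), "=&r" (rc), "+A" (*ptr)
            : "r" (old), "r" (new)
            : "memory");
            return prev;
    }

A native CAS would do the compare and the conditional store as one
indivisible operation, so the hardware - not this software retry loop -
picks the winner, and that is where a forward-progress guarantee could
come from.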
>
> ----
>
> Another issue to consider here is what happens when you hit one of these
> tight spinlocks when the branch target cache is empty and they fail (ie
> loop back and try again) - the default branch prediction, and resulting
> speculation, is (very) likely to be looping back, while hopefully most
> locks are not contended when you hit them, and that speculation would be
> wrong - a spinlock like this may not be so good:
>
>       li      a0, 1
> loop:
>       amoswap a1, a0, (a2)
>       bnez    a1, loop
>       ..... subsequent code
>
> In my world with no BTC info the pipe fills with dozens of amoswaps,
> rather than the 'subsequent code'. While (in my world) code like this:
>
>       li      a0, 1
> loop:
>       amoswap a1, a0, (a2)
>       bnez    a1, 1f
>       .... subsequent code
>
> 1:    j       loop
>
> would actually be better (in my world unconditional jump instructions are
> folded early and never see execution, so they're sort of free, though
> they mess with the issue/decode rate). Smart compilers could move the
> "j loop" out of the way, while the double branch on failure is not a big
> deal since either the lock is still held (and you don't care if it's
> slow) or it's been released, in which case the cache line has been stolen
> and the refetch of that cache line is going to dominate the next time
> around the loop.

Thx for sharing the view of the spinlock speculative path. But I think we
should use smp_cond_load_acquire() instead of a tight AMO loop: it lets us
use wfe/cpu_relax() so that other harts can utilize the core's pipeline
while we wait. Then we needn't optimize the "subsequent code" speculative
path on a multi-threaded core at all - just let the hart relax (see the
sketch at the end of this mail).

> I need to stress here that this is how my architecture works, others'
> will of course be different, though I expect other heavily speculative
> architectures to have similar issues :-)
>
> Paul Campbell
> Moonbase Otago
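Here is the sketch I mentioned above - just a rough illustration of the
idea, not a proposed patch. tas_lock is a hypothetical test-and-set lock;
the helpers atomic_xchg_acquire() and atomic_cond_read_acquire() are the
kernel's (the latter is the atomic_t wrapper around
smp_cond_load_acquire(), which an architecture may back with wfe or
cpu_relax()):

    #include <linux/atomic.h>

    /* A hypothetical test-and-set lock: 0 == unlocked, 1 == locked. */
    struct tas_lock {
            atomic_t val;
    };

    static inline void tas_lock_acquire(struct tas_lock *lock)
    {
            for (;;) {
                    /* One AMO attempt (an amoswap on RISC-V). */
                    if (!atomic_xchg_acquire(&lock->val, 1))
                            return;
                    /*
                     * Wait with ordinary loads until the lock looks
                     * free. The arch may implement this with
                     * wfe/cpu_relax(), so the hart relaxes instead of
                     * hammering the cache line with AMOs.
                     */
                    atomic_cond_read_acquire(&lock->val, !VAL);
            }
    }

    static inline void tas_lock_release(struct tas_lock *lock)
    {
            atomic_set_release(&lock->val, 0);
    }

With this shape the contended hart issues no further AMOs while it waits,
which also sidesteps the empty-BTC speculation problem you describe: the
branch back to the AMO is only taken after the line has actually changed.

--
Best Regards
 Guo Ren

ML: https://lore.kernel.org/linux-csky/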