From: Guo Ren <guoren@xxxxxxxxxxxxxxxxx>

The current riscv port still uses a home-grown ("baby") spinlock
implementation, which causes fairness and cache-line bouncing problems.
Many people have been involved and have put effort into improving it:

 - The first version of the patch was made in 2019.1:
   https://lore.kernel.org/linux-riscv/20190211043829.30096-1-michaeljclark@xxxxxxx/#r
 - The second version was made in 2020.11:
   https://lore.kernel.org/linux-riscv/1606225437-22948-2-git-send-email-guoren@xxxxxxxxxx/
 - A good discussion at Platform HSC, 2021-03-08:
   https://drive.google.com/drive/folders/1ooqdnIsYx7XKor5O1XTtM6D1CHp4hc0p

Your comments and Tested-by, Co-developed-by, or Reviewed-by tags are
welcome. Let's get qspinlock into riscv now (also covering architectures
that have no xchg16 atomic instruction).

Changes in V4:
 - Remove the custom sub-word xchg implementation
 - Add ARCH_USE_QUEUED_SPINLOCKS_XCHG32 in locking/qspinlock

Changes in V3:
 - Coding convention fixes following Peter Zijlstra's advice

Changes in V2:
 - Coding convention fixes in cmpxchg.h
 - Re-implement short xchg
 - Remove the char & cmpxchg implementations

Guo Ren (3):
  riscv: cmpxchg.h: Cleanup unused code
  riscv: cmpxchg.h: Merge macros
  riscv: cmpxchg.h: Implement xchg for short

Michael Clark (1):
  riscv: Convert custom spinlock/rwlock to generic qspinlock/qrwlock

 arch/riscv/Kconfig                      |   2 +
 arch/riscv/include/asm/Kbuild           |   3 +
 arch/riscv/include/asm/cmpxchg.h        | 211 ++++++------------------
 arch/riscv/include/asm/spinlock.h       | 126 +-------------
 arch/riscv/include/asm/spinlock_types.h |  15 +-
 5 files changed, 58 insertions(+), 299 deletions(-)

-- 
2.17.1
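
P.S. For readers unfamiliar with the xchg16 issue mentioned above: the
idea behind ARCH_USE_QUEUED_SPINLOCKS_XCHG32 is to let qspinlock update
the 16-bit tail field through a 32-bit compare-and-swap loop on the whole
lock word instead of relying on a native sub-word exchange. The userspace
sketch below only illustrates that idea; the names, word layout, and
constants are simplified assumptions and are not taken from the series.

/*
 * Illustrative sketch only: emulate a 16-bit exchange of the upper half
 * ("tail") of a 32-bit lock word using a 32-bit compare-and-swap loop.
 * Hypothetical layout: low 16 bits = locked/pending, high 16 bits = tail.
 */
#include <stdint.h>
#include <stdio.h>

#define TAIL_SHIFT		16
#define LOCKED_PENDING_MASK	0x0000ffffu	/* lower half of the word */

/* Swap in a new tail and return the old one, using only 32-bit CAS. */
static uint16_t xchg_tail32(uint32_t *lock, uint16_t tail)
{
	uint32_t old = __atomic_load_n(lock, __ATOMIC_RELAXED);
	uint32_t new;

	do {
		/* Preserve the locked/pending half, replace the tail half. */
		new = (old & LOCKED_PENDING_MASK) |
		      ((uint32_t)tail << TAIL_SHIFT);
		/* On CAS failure, 'old' is refreshed with the current value. */
	} while (!__atomic_compare_exchange_n(lock, &old, new, 0,
					      __ATOMIC_RELAXED,
					      __ATOMIC_RELAXED));

	return old >> TAIL_SHIFT;
}

int main(void)
{
	uint32_t lock = 0x00010001;	/* locked bit set, tail = 1 */
	uint16_t prev = xchg_tail32(&lock, 2);

	printf("old tail %u, lock word now %#x\n", prev, lock);
	return 0;
}

The trade-off is that the CAS loop can retry under contention, whereas a
native 16-bit xchg completes in a single atomic operation.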