From: Matteo Croce <mcroce@xxxxxxxxxxxxx>

Replace the assembly mem{cpy,move,set} routines with C equivalents.

The C code accesses RAM with the largest bit width possible, but
without doing unaligned accesses.

A further improvement could be to use multiple reads and writes per
iteration, as the assembly version tried to do.

Tested on a BeagleV Starlight with a SiFive U74 core, where the
improvement is noticeable.

v3 -> v4:
- incorporate changes from the proposed generic version:
  https://lore.kernel.org/lkml/20210617152754.17960-1-mcroce@xxxxxxxxxxxxxxxxxxx/

v2 -> v3:
- alias mem* to __mem*, not vice versa
- use __alias instead of a tail call

v1 -> v2:
- reduce the threshold from 64 to 16 bytes
- fix KASAN build
- optimize memset

Matteo Croce (3):
  riscv: optimized memcpy
  riscv: optimized memmove
  riscv: optimized memset

 arch/riscv/include/asm/string.h |  18 ++--
 arch/riscv/kernel/Makefile      |   1 -
 arch/riscv/kernel/riscv_ksyms.c |  17 ----
 arch/riscv/lib/Makefile         |   4 +-
 arch/riscv/lib/memcpy.S         | 108 ----------------------
 arch/riscv/lib/memmove.S        |  64 -------------
 arch/riscv/lib/memset.S         | 113 -----------------------
 arch/riscv/lib/string.c         | 154 ++++++++++++++++++++++++++++++++
 8 files changed, 164 insertions(+), 315 deletions(-)
 delete mode 100644 arch/riscv/kernel/riscv_ksyms.c
 delete mode 100644 arch/riscv/lib/memcpy.S
 delete mode 100644 arch/riscv/lib/memmove.S
 delete mode 100644 arch/riscv/lib/memset.S
 create mode 100644 arch/riscv/lib/string.c

-- 
2.31.1
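
[Editor's illustration] A minimal C sketch of the "largest bit width
without unaligned accesses" idea described in the cover letter. This
is not the code from the patches themselves; the function name
word_memcpy and the userspace headers are assumptions for the sketch,
while the real implementation lives in arch/riscv/lib/string.c and
differs in details such as the 16-byte small-copy threshold and the
__alias-based KASAN handling mentioned in the changelog:

#include <stddef.h>
#include <stdint.h>

/*
 * Illustrative only: copy byte-by-byte until the destination is
 * word-aligned, then move one machine word (unsigned long) at a
 * time, and finish the tail in single bytes.
 */
void *word_memcpy(void *dest, const void *src, size_t count)
{
	unsigned char *d = dest;
	const unsigned char *s = src;

	/* Align the destination to sizeof(long) with byte copies. */
	while (count && ((uintptr_t)d & (sizeof(long) - 1))) {
		*d++ = *s++;
		count--;
	}

	/*
	 * Bulk copy one word per iteration while both pointers are
	 * word-aligned; this is the "largest bit width" fast path.
	 */
	if (!((uintptr_t)s & (sizeof(long) - 1))) {
		while (count >= sizeof(long)) {
			*(unsigned long *)d = *(const unsigned long *)s;
			d += sizeof(long);
			s += sizeof(long);
			count -= sizeof(long);
		}
	}

	/* Remaining tail (or a misaligned source) in single bytes. */
	while (count--)
		*d++ = *s++;

	return dest;
}

The same alignment pattern applies to memset, and memmove can reuse
it with a backward copy when the source and destination overlap.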