Re: [PATCH 0/4] RISC-V CRC optimizations

Hi Ignacio,

On 08/03/2025 13:58, Ignacio Encinas Rubio wrote:
> Hello!
>
> On 2/3/25 23:04, Eric Biggers wrote:
>> So, quite positive results.  Though, the fact the msb-first CRCs are (still) so
>> much slower than lsb-first ones indicates that be64_to_cpu() is super slow on
>> RISC-V.  That seems to be caused by the rev8 instruction from Zbb not being
>> used.  I wonder if there are any plans to make the endianness swap macros use
>> rev8, or if I'm going to have to roll my own endianness swap in the CRC code.
>> (I assume it would be fine for the CRC code to depend on both Zbb and Zbc.)
> I saw this message the other day and started working on a patch, but I
> would like to double-check that I'm on the right track:
>
> - be64_to_cpu() ends up being __swab64() (include/uapi/linux/swab.h)
>
> If Zbb were part of the base ISA, turning on CONFIG_ARCH_USE_BUILTIN_BSWAP
> would take care of the problem, but that is not the case.

> Therefore, we have to define __arch_swab<X>, as some architectures do in
> arch/<ARCH>/include/uapi/asm/swab.h.
>
> For those functions to be correct in generic kernels, we would need to
> use the ALTERNATIVE() macros as in arch/riscv/include/asm/bitops.h.
> Would this be OK? I'm not sure whether the overhead of the
> ALTERNATIVEs can be a problem here.


Yes, using alternatives here is the right way to go, and the only overhead
when Zbb is available would be a nop (take a look at lib/csum.c).

Thanks for working on this, looking forward to your patch,

Alex


> Thanks in advance :)

_______________________________________________
linux-riscv mailing list
linux-riscv@xxxxxxxxxxxxxxxxxxx
http://lists.infradead.org/mailman/listinfo/linux-riscv



