>> Could you share the error log please?
>
> OK, I spotted one issue with this code:
>
> arch/arm/crypto/sha256-core.S: Assembler messages:
> arch/arm/crypto/sha256-core.S:1847: Error: invalid constant (ffffefb0)
> after fixup
>
> This is caused by the fact that, when building the integer-only code
> for an older architecture, the conditional compilation produces a
> slightly bigger preceding function, and the symbol K256 is out of
> range for the adr instruction.
>
> @Jean-Christophe: is that the same problem that you hit?
>
> @Andy: I propose we do something similar as in the bsaes code:
>
> #ifdef __thumb__
> #define adrl adr
> #endif
>
> and replace the offending line with
>
> adrl r14,K256

Sorry about the delay. Yes, that would do. I did test all combinations,
but all "my" combinations, i.e. without __KERNEL__ defined :-( And
without __KERNEL__ there are a few extra instructions in the
integer-only subroutine that push the instruction in question toward a
higher address, so the constant was ffffefc0, which can be encoded.
Anyway, I've chosen to add that #define next to the .thumb directive.
See attached.

Ard, you have mentioned that you've verified it on big-endian, but I've
spotted a little-endian dependency (see #ifndef __ARMEB__ in the
attached patch). I guess it worked for you either because it was the
NEON code that was tested (it does work as is), or because
__LINUX_ARM_ARCH__ was less than 7 (in which case it uses the
endian-neutral byte-by-byte data load). Can you confirm either?
diff --git a/crypto/sha/asm/sha256-armv4.pl b/crypto/sha/asm/sha256-armv4.pl
index 4fee74d..fac0533 100644
--- a/crypto/sha/asm/sha256-armv4.pl
+++ b/crypto/sha/asm/sha256-armv4.pl
@@ -73,7 +73,9 @@ $code.=<<___ if ($i<16);
 	eor	$t0,$e,$e,ror#`$Sigma1[1]-$Sigma1[0]`
 	add	$a,$a,$t2			@ h+=Maj(a,b,c) from the past
 	eor	$t0,$t0,$e,ror#`$Sigma1[2]-$Sigma1[0]`	@ Sigma1(e)
+# ifndef __ARMEB__
 	rev	$t1,$t1
+# endif
 #else
 	@ ldrb	$t1,[$inp,#3]			@ $i
 	add	$a,$a,$t2			@ h+=Maj(a,b,c) from the past
@@ -166,6 +168,7 @@ $code=<<___;
 #else
 .syntax	unified
 # ifdef __thumb2__
+#  define adrl adr
 .thumb
 # else
 .code	32
@@ -460,7 +463,7 @@ sha256_block_data_order_neon:
 	stmdb	sp!,{r4-r12,lr}
 	sub	$H,sp,#16*4+16
-	adr	$Ktbl,K256
+	adrl	$Ktbl,K256
 	bic	$H,$H,#15		@ align for 128-bit stores
 	mov	$t2,sp
 	mov	sp,$H			@ alloca