On Wed, Nov 2, 2016 at 5:25 PM, Jason A. Donenfeld <Jason@xxxxxxxxx> wrote:
> These architectures select HAVE_EFFICIENT_UNALIGNED_ACCESS:
>
>   s390 arm arm64 powerpc x86 x86_64
>
> So, these will use the original old code.
>
> The architectures that will thus use the new code are:
>
>   alpha arc avr32 blackfin c6x cris frv h8300 hexagon ia64 m32r m68k
>   metag microblaze mips mn10300 nios2 openrisc parisc score sh sparc
>   tile um unicore32 xtensa

What I have found in practice, from helping maintain a security library
and running benchmarks until my eyes bled: UNALIGNED_ACCESS is a kiss
of death. It effectively prohibits -O3 and above, because dereferencing
a misaligned pointer is undefined behavior in C and it defeats GCC's
vectorizer. In the bigger picture, it simply slows things down.

Once we moved away from UNALIGNED_ACCESS and started testing at -O3 and
-O5, the benchmarks enjoyed non-trivial speedups on top of whatever we
gained from the hand-tuned assembly language routines. Effectively, the
best speedup was the sum of the C and ASM gains; the two were additive,
not disjoint as they might appear. (A sketch of the portable load
pattern I mean is appended at the end of this mail.)

The one wrinkle for UNALIGNED_ACCESS is Bernstein's compressed tables
(https://cr.yp.to/antiforgery/cachetiming-20050414.pdf). There
UNALIGNED_ACCESS meets a security goal: the compressed table layouts
rely on unaligned reads, and they help blunt cache-timing attacks on
table lookups. The techniques from Bernstein's paper apply equally well
to AES, Camellia and other table-driven implementations. Painting with
a broad brush (and as far as I know), the kernel is not observing those
recommendations. (A sketch of the lookup-masking idea is also appended
below.)

My apologies if I parsed things incorrectly.

Jeff
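Here is the load-pattern sketch I mentioned above. This is my own
minimal illustration, not code from the kernel or from any particular
library, and the function names are mine. The point is that the
memcpy-based load is well-defined at any alignment, and GCC and Clang
compile the memcpy to a single load at -O2 and above on targets where
unaligned access is cheap:

#include <stdint.h>
#include <string.h>

/* The pattern to avoid: undefined behavior whenever p is not 4-byte
 * aligned, and it trips up GCC's vectorizer on strict-alignment
 * targets. */
static uint32_t load32_cast(const unsigned char *p)
{
	return *(const uint32_t *)p;
}

/* Well-defined at any alignment. On targets with cheap unaligned
 * access this compiles to a single load; on strict-alignment targets
 * it becomes byte loads, which is the best you can do there anyway. */
static uint32_t load32_le(const unsigned char *p)
{
	uint32_t v;

	memcpy(&v, p, sizeof(v));
#if defined(__BYTE_ORDER__) && (__BYTE_ORDER__ == __ORDER_BIG_ENDIAN__)
	v = __builtin_bswap32(v);	/* normalize to little-endian */
#endif
	return v;
}

Once the casts were gone, we could turn the optimizer all the way up
without tripping undefined behavior, which is where the extra speedups
came from.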
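And for the Bernstein point, a sketch of the lookup-masking idea. To be
clear, this is the straightforward masked-scan variant, not Bernstein's
exact compressed-table construction, and the function name is mine; the
point is only that the memory access pattern stops depending on the
secret index:

#include <stdint.h>

/* Scan-and-mask lookup into a 256-entry byte table (e.g. an AES
 * S-box). Every entry is read on every call, so the cache state no
 * longer reveals which index was wanted. */
static uint8_t table_lookup_ct(const uint8_t table[256], uint8_t index)
{
	uint8_t result = 0;
	unsigned int i;

	for (i = 0; i < 256; i++) {
		/* mask is 0xff when i == index and 0x00 otherwise,
		 * computed without a data-dependent branch */
		uint8_t mask = (uint8_t)(((i ^ index) - 1u) >> 8);

		result |= table[i] & mask;
	}
	return result;
}

The cost is reading the whole table on every lookup; as I understand
it, the compressed layouts in Bernstein's paper exist to shrink that
cost, which is where the unaligned reads come in.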