On Thu, May 31, 2012 at 7:27 AM, Herbert Xu <herbert@xxxxxxxxxxxxxxxxxxx> wrote:
> On Wed, May 30, 2012 at 01:43:08AM +0200, Mathias Krause wrote:
>> The 32 bit variant of cbc(aes) decrypt is using instructions requiring
>> 128 bit aligned memory locations but fails to ensure this constraint in
>> the code. Fix this by loading the data into intermediate registers with
>> load unaligned instructions.
>>
>> This fixes reported general protection faults related to aesni.
>>
>> References: https://bugzilla.kernel.org/show_bug.cgi?id=43223
>> Reported-by: Daniel <garkein@xxxxxxxxxxxxxxxx>
>> Cc: stable@xxxxxxxxxx [v2.6.39+]
>> Signed-off-by: Mathias Krause <minipli@xxxxxxxxxxxxxx>
>
> Have you measured this against increasing alignmask to 15?

No, but the latter will likely be much slower, as it would need to
memmove the data if it's not aligned, right?

My patch essentially just breaks the combined "XOR a memory operand with
a register" operation into two -- load the memory operand into a
register, then XOR the two registers. It shouldn't be much slower than
the current version, but it fixes a bug the current version exposes when
working on unaligned data.

That said, I did a micro-benchmark of "pxor (%edx), %xmm0" vs.
"movups (%edx), %xmm1; pxor %xmm1, %xmm0" and observed that the latter
might even be slightly faster! But changing the code to perform better
is out of scope for this patch -- it should just fix the bug. We can
increase performance in a follow-up patch.

Mathias
--
To unsubscribe from this list: send the line "unsubscribe linux-crypto" in
the body of a message to majordomo@xxxxxxxxxxxxxxx
More majordomo info at http://vger.kernel.org/majordomo-info.html