The current salsa20_32.c calls the salsa20_encrypt_bytes() function with the
source and destination buffers in the wrong order. This patch corrects it.

Signed-off-by: Tan Swee Heng <thesweeheng@xxxxxxxxx>

Some thoughts:

I detected this when testing against the large test vector. For small test
vectors, the blkcipher_walk code uses source and destination buffers that
are equal; only near a page boundary does blkcipher_walk use different
buffers (at least this is what I have observed on my system). Since many of
the tcrypt test vectors are small, src == dst frequently while
blkcipher_walk-ing. This seems to imply that most of the *_segment() code
in the block cipher modes is seldom tested. Perhaps this is something that
a new tcrypt framework should address.
diff --git a/arch/x86/crypto/salsa20_32.c b/arch/x86/crypto/salsa20_32.c
index 14dd69d..1148a17 100644
--- a/arch/x86/crypto/salsa20_32.c
+++ b/arch/x86/crypto/salsa20_32.c
@@ -65,21 +65,21 @@ static int encrypt(struct blkcipher_desc *desc,
 
 	if (likely(walk.nbytes == nbytes)) {
-		salsa20_encrypt_bytes(ctx, walk.dst.virt.addr,
-				      walk.src.virt.addr, nbytes);
+		salsa20_encrypt_bytes(ctx, walk.src.virt.addr,
+				      walk.dst.virt.addr, nbytes);
 		return blkcipher_walk_done(desc, &walk, 0);
 	}
 
 	while (walk.nbytes >= 64) {
-		salsa20_encrypt_bytes(ctx, walk.dst.virt.addr,
-				      walk.src.virt.addr,
+		salsa20_encrypt_bytes(ctx, walk.src.virt.addr,
+				      walk.dst.virt.addr,
 				      walk.nbytes - (walk.nbytes % 64));
 		err = blkcipher_walk_done(desc, &walk, walk.nbytes % 64);
 	}
 
 	if (walk.nbytes) {
-		salsa20_encrypt_bytes(ctx, walk.dst.virt.addr,
-				      walk.src.virt.addr, walk.nbytes);
+		salsa20_encrypt_bytes(ctx, walk.src.virt.addr,
+				      walk.dst.virt.addr, walk.nbytes);
 		err = blkcipher_walk_done(desc, &walk, 0);
 	}