Re: [PATCH 1/2] arm64 aes: fix encryption of unaligned data


On 26 July 2014 15:13, Ard Biesheuvel <ard.biesheuvel@xxxxxxxxxx> wrote:
> On 26 July 2014 01:40, Mikulas Patocka <mpatocka@xxxxxxxxxx> wrote:
>> cryptsetup fails on arm64 when using kernel encryption via AF_ALG socket.
>> See https://bugzilla.redhat.com/show_bug.cgi?id=1122937
>>
>> The bug is caused by incorrect handling of unaligned data in
>> arch/arm64/crypto/aes-glue.c. Cryptsetup creates a buffer that is aligned
>> on 8 bytes, but not on 16 bytes. It opens AF_ALG socket and uses the
>> socket to encrypt data in the buffer. The arm64 crypto driver then
>> corrupts the data or crashes in scatterwalk_pagedone().
>>
>> This patch fixes the bug by passing the number of residue bytes that
>> were not processed as the last parameter to blkcipher_walk_done().
>>
>> Signed-off-by: Mikulas Patocka <mpatocka@xxxxxxxxxx>
>>
>
> Acked-by: Ard Biesheuvel <ard.biesheuvel@xxxxxxxxxx>
>
> Thanks for the patch. This correctly fixes a thinko on my part
> regarding the guarantees offered by the blkcipher API. Unfortunately,
> this wasn't caught by the tcrypt test suite, so I will propose some
> patches later to address cases like this.
>
> BTW, using kernel crypto through AF_ALG is fairly pointless when the
> CPU has crypto instructions rather than a crypto accelerator
> peripheral. Should we change anything on the kernel side so that we
> don't expose these drivers?
>
> @Catalin: this is a bug fix for code that was merged this cycle. I
> would recommend merging it for 3.16, but if not, could you please
> add a Cc: stable? Or ack it so that perhaps Herbert can take both?
> (There is a similar patch for ARM as well.)
>

... except that this patch fails to cover the ECB case.
@Mikulas: could you send a v2 that also fixes ECB encryption/decryption?

Cheers,
Ard.


>> Index: linux-3.16.0-0.rc6.git1.1.fc21.aarch64/arch/arm64/crypto/aes-glue.c
>> ===================================================================
>> --- linux-3.16.0-0.rc6.git1.1.fc21.aarch64.orig/arch/arm64/crypto/aes-glue.c
>> +++ linux-3.16.0-0.rc6.git1.1.fc21.aarch64/arch/arm64/crypto/aes-glue.c
>> @@ -106,7 +106,7 @@ static int ecb_encrypt(struct blkcipher_
>>         for (first = 1; (blocks = (walk.nbytes / AES_BLOCK_SIZE)); first = 0) {
>>                 aes_ecb_encrypt(walk.dst.virt.addr, walk.src.virt.addr,
>>                                 (u8 *)ctx->key_enc, rounds, blocks, first);
>> -               err = blkcipher_walk_done(desc, &walk, 0);
>> +               err = blkcipher_walk_done(desc, &walk, walk.nbytes % AES_BLOCK_SIZE);
>>         }
>>         kernel_neon_end();
>>         return err;
>> @@ -128,7 +128,7 @@ static int ecb_decrypt(struct blkcipher_
>>         for (first = 1; (blocks = (walk.nbytes / AES_BLOCK_SIZE)); first = 0) {
>>                 aes_ecb_decrypt(walk.dst.virt.addr, walk.src.virt.addr,
>>                                 (u8 *)ctx->key_dec, rounds, blocks, first);
>> -               err = blkcipher_walk_done(desc, &walk, 0);
>> +               err = blkcipher_walk_done(desc, &walk, walk.nbytes % AES_BLOCK_SIZE);
>>         }
>>         kernel_neon_end();
>>         return err;
>> @@ -151,7 +151,7 @@ static int cbc_encrypt(struct blkcipher_
>>                 aes_cbc_encrypt(walk.dst.virt.addr, walk.src.virt.addr,
>>                                 (u8 *)ctx->key_enc, rounds, blocks, walk.iv,
>>                                 first);
>> -               err = blkcipher_walk_done(desc, &walk, 0);
>> +               err = blkcipher_walk_done(desc, &walk, walk.nbytes % AES_BLOCK_SIZE);
>>         }
>>         kernel_neon_end();
>>         return err;
>> @@ -174,7 +174,7 @@ static int cbc_decrypt(struct blkcipher_
>>                 aes_cbc_decrypt(walk.dst.virt.addr, walk.src.virt.addr,
>>                                 (u8 *)ctx->key_dec, rounds, blocks, walk.iv,
>>                                 first);
>> -               err = blkcipher_walk_done(desc, &walk, 0);
>> +               err = blkcipher_walk_done(desc, &walk, walk.nbytes % AES_BLOCK_SIZE);
>>         }
>>         kernel_neon_end();
>>         return err;
>> @@ -243,7 +243,7 @@ static int xts_encrypt(struct blkcipher_
>>                 aes_xts_encrypt(walk.dst.virt.addr, walk.src.virt.addr,
>>                                 (u8 *)ctx->key1.key_enc, rounds, blocks,
>>                                 (u8 *)ctx->key2.key_enc, walk.iv, first);
>> -               err = blkcipher_walk_done(desc, &walk, 0);
>> +               err = blkcipher_walk_done(desc, &walk, walk.nbytes % AES_BLOCK_SIZE);
>>         }
>>         kernel_neon_end();
>>
>> @@ -267,7 +267,7 @@ static int xts_decrypt(struct blkcipher_
>>                 aes_xts_decrypt(walk.dst.virt.addr, walk.src.virt.addr,
>>                                 (u8 *)ctx->key1.key_dec, rounds, blocks,
>>                                 (u8 *)ctx->key2.key_enc, walk.iv, first);
>> -               err = blkcipher_walk_done(desc, &walk, 0);
>> +               err = blkcipher_walk_done(desc, &walk, walk.nbytes % AES_BLOCK_SIZE);
>>         }
>>         kernel_neon_end();
>>