RE: Kernel panic - encryption/decryption failed when open file on Arm64

Hi Ard,

Thanks for the prompt reply. With the patch there is no panic anymore, but the encryption/decryption still does not succeed.

As Herbert points out, "If the page allocation fails in blkcipher_walk_next it'll simply switch over to processing it block by block". So does that mean the encryption/decryption should be successful even if the page allocation fails? Please correct me if I misunderstand anything. Thanks in advance.

Regards,
Shuoran

> -----Original Message-----
> From: Ard Biesheuvel [mailto:ard.biesheuvel@xxxxxxxxxx]
> Sent: Friday, September 09, 2016 6:57 PM
> To: Xiakaixu
> Cc: Herbert Xu; David S. Miller; Theodore Ts'o; Jaegeuk Kim;
> nhorman@xxxxxxxxxxxxx; mh1@xxxxxx; linux-crypto@xxxxxxxxxxxxxxx;
> linux-kernel@xxxxxxxxxxxxxxx; Wangbintian; liushuoran; Huxinwei; zhangzhibin
> (C)
> Subject: Re: Kernel panic - encryption/decryption failed when open file on
> Arm64
> 
> On 9 September 2016 at 11:31, Ard Biesheuvel <ard.biesheuvel@xxxxxxxxxx>
> wrote:
> > On 9 September 2016 at 11:19, xiakaixu <xiakaixu@xxxxxxxxxx> wrote:
> >> Hi,
> >>
> >> After digging into this crash, it seems to be a bug that only exists
> >> on the armv8 board. It occurs in this function in
> >> arch/arm64/crypto/aes-glue.c:
> >>
> >> static int ctr_encrypt(struct blkcipher_desc *desc, struct scatterlist *dst,
> >>                        struct scatterlist *src, unsigned int nbytes)
> >> {
> >>         ...
> >>         desc->flags &= ~CRYPTO_TFM_REQ_MAY_SLEEP;
> >>         blkcipher_walk_init(&walk, dst, src, nbytes);
> >>         err = blkcipher_walk_virt_block(desc, &walk, AES_BLOCK_SIZE);
> >>                                 ----> page allocation failed
> >>         ...
> >>         while ((blocks = (walk.nbytes / AES_BLOCK_SIZE))) {
> >>                                 ----> walk.nbytes = 0, so this loop is skipped
> >>                 aes_ctr_encrypt(walk.dst.virt.addr, walk.src.virt.addr,
> >>                                 (u8 *)ctx->key_enc, rounds, blocks, walk.iv,
> >>                                 first);
> >>         ...
> >>                 err = blkcipher_walk_done(desc, &walk,
> >>                                           walk.nbytes % AES_BLOCK_SIZE);
> >>         }
> >>         if (nbytes) {
> >>                                 ----> this if () statement is entered
> >>                 u8 *tdst = walk.dst.virt.addr + blocks * AES_BLOCK_SIZE;
> >>                 u8 *tsrc = walk.src.virt.addr + blocks * AES_BLOCK_SIZE;
> >>         ...
> >>                 aes_ctr_encrypt(tail, tsrc, (u8 *)ctx->key_enc, rounds,
> >>                                 ----> the second parameter (tsrc) is NULL, so crash...
> >>                                 blocks, walk.iv, first);
> >>         ...
> >>         }
> >>         ...
> >> }
> >>
> >>
> >> If the page allocation fails in blkcipher_walk_virt_block(), the
> >> variable walk.nbytes is 0, so the while() loop is skipped and the
> >> if (nbytes) statement is entered. But there the variable tsrc is NULL,
> >> and it is also the second input parameter of aes_ctr_encrypt()...
> >> Kernel panic...
> >>
> >> I have also looked at the equivalent functions in other architectures,
> >> and they use if (walk.nbytes), not the if (nbytes) statement used on
> >> armv8, so I think the armv8 ctr_encrypt() should handle the page
> >> allocation failure case.
> >>
> 
> Does this solve your problem?
> 
> diff --git a/arch/arm64/crypto/aes-glue.c b/arch/arm64/crypto/aes-glue.c
> index 5c888049d061..6b2aa0fd6cd0 100644
> --- a/arch/arm64/crypto/aes-glue.c
> +++ b/arch/arm64/crypto/aes-glue.c
> @@ -216,7 +216,7 @@ static int ctr_encrypt(struct blkcipher_desc *desc, struct scatterlist *dst,
>                 err = blkcipher_walk_done(desc, &walk,
>                                           walk.nbytes % AES_BLOCK_SIZE);
>         }
> -       if (nbytes) {
> +       if (walk.nbytes % AES_BLOCK_SIZE) {
>                 u8 *tdst = walk.dst.virt.addr + blocks * AES_BLOCK_SIZE;
>                 u8 *tsrc = walk.src.virt.addr + blocks * AES_BLOCK_SIZE;
>                 u8 __aligned(8) tail[AES_BLOCK_SIZE];