Re: [PATCH v4 11/14] treewide: Prepare to remove VLA usage for AHASH_REQUEST_ON_STACK

On Wed, Jul 18, 2018 at 8:19 AM, Ard Biesheuvel
<ard.biesheuvel@xxxxxxxxxx> wrote:
> On 18 July 2018 at 23:50, Ard Biesheuvel <ard.biesheuvel@xxxxxxxxxx> wrote:
>> On 18 July 2018 at 05:59, Arnd Bergmann <arnd@xxxxxxxx> wrote:
>>> On Sun, Jul 15, 2018 at 6:28 AM, Kees Cook <keescook@xxxxxxxxxxxx> wrote:
>>>>
>>>> After my ahash to shash conversions, only ccm is left as an ahash
>>>> user, since it actually uses sg. But with the hard-coded value reduced
>>>> to 376, this doesn't trip the frame warnings any more. :)
>>>>
>>>> I'll send an updated series soon.
>>>
>>> Maybe we should get rid of that one as well then and remove
>>> AHASH_REQUEST_ON_STACK()?
>>>
>>> I see that Ard (now on Cc) added this usage only recently. Looking
>>> at the code some more, I also find that the descsize is probably
>>> much smaller than 376 for all possible cases of "cbcmac(*)",
>>> either alg->cra_blocksize plus a few bytes or sizeof(mac_desc_ctx)
>>> (i.e. 20) for arch/arm64/crypto/aes-glue.c.
>>>
>>> Walking the sglist here means open-coding a shash_ahash_update()
>>> implementation in crypto_ccm_auth(), but that doesn't seem to
>>> add much complexity over what it already has to do to chain
>>> the sglist today.
>>>
>>
>> It would be better to add a variably sized ahash request member to
>> struct crypto_ccm_req_priv_ctx; the only problem is that the last
>> member of that struct (skreq) is already variably sized, so it would
>> involve having a struct ahash_request pointer pointing into the same
>> struct, after the skreq member.
>
> Actually, I think the below should already do the trick: ahreq and
> skreq are not used at the same time, so we can stick them in a union
> and take the max() of the two reqsizes to ensure there's enough empty
> space after the struct for either request's context.
>
> --------8<----------
> diff --git a/crypto/ccm.c b/crypto/ccm.c
> index 0a083342ec8c..b242fd0d3262 100644
> --- a/crypto/ccm.c
> +++ b/crypto/ccm.c
> @@ -50,7 +50,10 @@ struct crypto_ccm_req_priv_ctx {
>         u32 flags;
>         struct scatterlist src[3];
>         struct scatterlist dst[3];
> -       struct skcipher_request skreq;
> +       union {
> +               struct ahash_request ahreq;
> +               struct skcipher_request skreq;
> +       };
>  };
>
>  struct cbcmac_tfm_ctx {
> @@ -181,7 +184,7 @@
>         struct crypto_ccm_req_priv_ctx *pctx = crypto_ccm_reqctx(req);
>         struct crypto_aead *aead = crypto_aead_reqtfm(req);
>         struct crypto_ccm_ctx *ctx = crypto_aead_ctx(aead);
> -       AHASH_REQUEST_ON_STACK(ahreq, ctx->mac);
> +       struct ahash_request *ahreq = &pctx->ahreq;
>         unsigned int assoclen = req->assoclen;
>         struct scatterlist sg[3];
>         u8 *odata = pctx->odata;
> @@ -427,7 +430,7 @@
>         crypto_aead_set_reqsize(
>                 tfm,
>                 align + sizeof(struct crypto_ccm_req_priv_ctx) +
> -               crypto_skcipher_reqsize(ctr));
> +               max(crypto_ahash_reqsize(mac), crypto_skcipher_reqsize(ctr)));
>
>         return 0;

Oh, this is lovely! Thank you! Shall I add your S-o-b and add it to the series?
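
For anyone reading along: with this change the embedded request is driven
exactly the way the on-stack one was; only where its request context lives
changes. A rough, condensed sketch of the resulting crypto_ccm_auth() flow
(ccm_auth_sketch is a made-up name, and the B_0 block formatting and
associated-data length encoding are elided):

static int ccm_auth_sketch(struct aead_request *req,
			   struct scatterlist *plain,
			   unsigned int cryptlen)
{
	struct crypto_ccm_req_priv_ctx *pctx = crypto_ccm_reqctx(req);
	struct crypto_aead *aead = crypto_aead_reqtfm(req);
	struct crypto_ccm_ctx *ctx = crypto_aead_ctx(aead);
	/* Was AHASH_REQUEST_ON_STACK(ahreq, ctx->mac); the mac's request
	 * context now sits in the trailing space reserved by
	 * crypto_aead_set_reqsize(), hence the max() in the patch. */
	struct ahash_request *ahreq = &pctx->ahreq;
	int err;

	ahash_request_set_tfm(ahreq, ctx->mac);
	ahash_request_set_callback(ahreq, pctx->flags, NULL, NULL);

	/* Hash the formatted B_0 block plus the associated data;
	 * the scatterlist setup is unchanged and elided here. */
	ahash_request_set_crypt(ahreq, pctx->src, NULL, req->assoclen);
	err = crypto_ahash_init(ahreq);
	if (err)
		return err;
	err = crypto_ahash_update(ahreq);
	if (err)
		return err;

	/* Fold in the plaintext and write the MAC into odata. */
	ahash_request_set_crypt(ahreq, plain, pctx->odata, cryptlen);
	return crypto_ahash_finup(ahreq);
}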

-Kees

-- 
Kees Cook
Pixel Security


