Re: [PATCH v3 9/9] crypto: shash: Remove VLA usage in unaligned hashing

On Thu, Jun 28, 2018 at 05:28:43PM -0700, Kees Cook wrote:
> In the quest to remove all stack VLA usage from the kernel[1], this uses
> the newly defined max alignment to perform unaligned hashing to avoid
> VLAs, and drops the helper function while adding sanity checks on the
> resulting buffer sizes. Additionally, the __aligned_largest macro is
> removed since this helper was the only user.
> 
> [1] https://lkml.kernel.org/r/CA+55aFzCG-zNmZwX4A2FQpadafLfEzK6CC=qPXydAacU1RqZWA@xxxxxxxxxxxxxx
> 
> Signed-off-by: Kees Cook <keescook@xxxxxxxxxxxx>
> ---
>  crypto/shash.c               | 19 ++++++++-----------
>  include/linux/compiler-gcc.h |  1 -
>  2 files changed, 8 insertions(+), 12 deletions(-)
> 
> diff --git a/crypto/shash.c b/crypto/shash.c
> index ab6902c6dae7..8081c5e03770 100644
> --- a/crypto/shash.c
> +++ b/crypto/shash.c
> @@ -73,13 +73,6 @@ int crypto_shash_setkey(struct crypto_shash *tfm, const u8 *key,
>  }
>  EXPORT_SYMBOL_GPL(crypto_shash_setkey);
>  
> -static inline unsigned int shash_align_buffer_size(unsigned len,
> -						   unsigned long mask)
> -{
> -	typedef u8 __aligned_largest u8_aligned;
> -	return len + (mask & ~(__alignof__(u8_aligned) - 1));
> -}
> -
>  static int shash_update_unaligned(struct shash_desc *desc, const u8 *data,
>  				  unsigned int len)
>  {
> @@ -88,11 +81,13 @@ static int shash_update_unaligned(struct shash_desc *desc, const u8 *data,
>  	unsigned long alignmask = crypto_shash_alignmask(tfm);
>  	unsigned int unaligned_len = alignmask + 1 -
>  				     ((unsigned long)data & alignmask);
> -	u8 ubuf[shash_align_buffer_size(unaligned_len, alignmask)]
> -		__aligned_largest;
> +	u8 ubuf[MAX_ALGAPI_ALIGNMASK + 1];
>  	u8 *buf = PTR_ALIGN(&ubuf[0], alignmask + 1);
>  	int err;
>  
> +	if (WARN_ON(buf + unaligned_len > ubuf + sizeof(ubuf)))
> +		return -EINVAL;
> +

How is 'ubuf' guaranteed to be large enough?  You removed the __aligned
attribute, so 'ubuf' can have any alignment.  So the aligned pointer 'buf' may
be as high as '&ubuf[alignmask]'.  Then, up to 'alignmask' bytes of data will be
copied into 'buf'... resulting in up to '2 * alignmask' bytes needed in 'ubuf'.
But you've only guaranteed 'alignmask + 1' bytes.

>  	if (unaligned_len > len)
>  		unaligned_len = len;
>  
> @@ -124,11 +119,13 @@ static int shash_final_unaligned(struct shash_desc *desc, u8 *out)
>  	unsigned long alignmask = crypto_shash_alignmask(tfm);
>  	struct shash_alg *shash = crypto_shash_alg(tfm);
>  	unsigned int ds = crypto_shash_digestsize(tfm);
> -	u8 ubuf[shash_align_buffer_size(ds, alignmask)]
> -		__aligned_largest;
> +	u8 ubuf[SHASH_MAX_DIGESTSIZE];
>  	u8 *buf = PTR_ALIGN(&ubuf[0], alignmask + 1);
>  	int err;
>  
> +	if (WARN_ON(buf + ds > ubuf + sizeof(ubuf)))
> +		return -EINVAL;
> +

Similar problem here.  Wouldn't 'ubuf' need to be of size 'alignmask + ds'?

>  	err = shash->final(desc, buf);
>  	if (err)
>  		goto out;
> diff --git a/include/linux/compiler-gcc.h b/include/linux/compiler-gcc.h
> index f1a7492a5cc8..1f1cdef36a82 100644
> --- a/include/linux/compiler-gcc.h
> +++ b/include/linux/compiler-gcc.h
> @@ -125,7 +125,6 @@
>   */
>  #define __pure			__attribute__((pure))
>  #define __aligned(x)		__attribute__((aligned(x)))
> -#define __aligned_largest	__attribute__((aligned))
>  #define __printf(a, b)		__attribute__((format(printf, a, b)))
>  #define __scanf(a, b)		__attribute__((format(scanf, a, b)))
>  #define __attribute_const__	__attribute__((__const__))
> -- 
> 2.17.1
> 
> --
> dm-devel mailing list
> dm-devel@xxxxxxxxxx
> https://www.redhat.com/mailman/listinfo/dm-devel
