Re: [PATCH] arch/s390/crypto/prng: Stop being stupidly wasteful with /dev/urandom

On 21.03.19 13:15, George Spelvin wrote:
> generate_entropy() was asking for 8K of high-quality entropy per 64 bytes
> of output, which is ridiculous.  Only ask get_random_bytes for the
> requested amount of entropy, and XOR with the hash of the timer-based
> seed material.
>
> (A possible followon improvement: the msbits of get_tod_clock_fast() are
> unchanging and therefore useless; you could store and hash only the low
> 32 bits and save half the hashing effort.)
>
> A second latent bugfix: prng_tdes_seed() was not copying the final partial
> doubleword of get_random_bytes() output to the parameter block.  It's
> only ever called with 8- and 16-byte requests, so the bug never triggered,
> but fix it so it won't be (silently) buggy in future.
>
> Not tested as I don't have access to an S/390, but the code is
> straightforward.
>
> Signed-off-by: George Spelvin <lkml@xxxxxxx>
> Cc: linux-s390@xxxxxxxxxxxxxxx
> Cc: Harald Freudenberger <freude@xxxxxxxxxxxxxxxxxx>
> Cc: Martin Schwidefsky <schwidefsky@xxxxxxxxxx>
> Fixes: 57127645d79d2e83e801f141f7d03f64accf28aa
> Cc: Jan Glauber <jan.glauber@xxxxxxxxxx>
> Fixes: 1b2782948997cf5a0d1747de13d43ba7dfa7c543
> ---
>  arch/s390/crypto/prng.c | 36 ++++++++++++++++--------------------
>  1 file changed, 16 insertions(+), 20 deletions(-)
>
> diff --git a/arch/s390/crypto/prng.c b/arch/s390/crypto/prng.c
> index a97a1802cfb4..e3f649e66c91 100644
> --- a/arch/s390/crypto/prng.c
> +++ b/arch/s390/crypto/prng.c
> @@ -122,33 +122,30 @@ static const u8 initial_parm_block[32] __initconst = {
>   */
>  static int generate_entropy(u8 *ebuf, size_t nbytes)
>  {
> -	int n, ret = 0;
> -	u8 *pg, *h, hash[64];
> +	int n, i, ret = 0;
> +	u64 *pg;
> +	u8 hash[64];
>
>  	/* allocate 2 pages */
> -	pg = (u8 *) __get_free_pages(GFP_KERNEL, 1);
> +	pg = (u64 *) __get_free_pages(GFP_KERNEL, 1);
>  	if (!pg) {
>  		prng_errorflag = PRNG_GEN_ENTROPY_FAILED;
>  		return -ENOMEM;
>  	}
>
> +	get_random_bytes(ebuf, nbytes);
>  	while (nbytes) {
> -		/* fill pages with urandom bytes */
> -		get_random_bytes(pg, 2*PAGE_SIZE);
> -		/* exor pages with 1024 stckf values */
> -		for (n = 0; n < 2 * PAGE_SIZE / sizeof(u64); n++) {
> -			u64 *p = ((u64 *)pg) + n;
> -			*p ^= get_tod_clock_fast();
> -		}
> -		n = (nbytes < sizeof(hash)) ? nbytes : sizeof(hash);
> -		if (n < sizeof(hash))
> -			h = hash;
> -		else
> -			h = ebuf;
> +		/* fill pages with 1024 stckf values */
> +		for (n = 0; n < 2 * PAGE_SIZE / sizeof(u64); n++)
> +			pg[n] = get_tod_clock_fast();
> +
>  		/* hash over the filled pages */
> -		cpacf_kimd(CPACF_KIMD_SHA_512, h, pg, 2*PAGE_SIZE);
> -		if (n < sizeof(hash))
> -			memcpy(ebuf, hash, n);
> +		cpacf_kimd(CPACF_KIMD_SHA_512, hash, (u8 *)pg, 2*PAGE_SIZE);
> +
> +		/* xor with the urandom bytes */
> +		n = (nbytes < sizeof(hash)) ? nbytes : sizeof(hash);
> +		for (i = 0; i < n; i++)
> +			ebuf[i] ^= hash[i];
>  		ret += n;
>  		ebuf += n;
>  		nbytes -= n;
> @@ -185,13 +182,12 @@ static void prng_tdes_seed(int nbytes)
>  	get_random_bytes(buf, nbytes);
>
>  	/* Add the entropy */
> -	while (nbytes >= 8) {
> +	while (nbytes > 0) {
>  		*((__u64 *)prng_data->prngws.parm_block) ^= *((__u64 *)(buf+i));
>  		prng_tdes_add_entropy();
>  		i += 8;
>  		nbytes -= 8;
>  	}
> -	prng_tdes_add_entropy();
>  	prng_data->prngws.reseed_counter = 0;
>  }
>
Hi George

well, yes, it sounds a little bit paranoid to pull 8k of random data and XOR it with
1024 get_tod_clock_fast() values (which is stckf) per 32 bytes of generated random buffer.

The calculation behind this is:
- every delivered byte should have 8 bits of entropy (100%).
- each stckf invocation 'carries' 0.5 bits of entropy.
- so to produce 32 bytes with 100% entropy, 512 stckf calls are required.
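Spelled out:

    32 bytes * 8 bits/byte        = 256 bits of entropy needed
    256 bits / 0.5 bits per stckf = 512 stckf invocations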
And that was the original implementation: it used one page, pre-filled it with
get_random_bytes() (which is not that expensive, as it is the kernel interface
to /dev/urandom, NOT /dev/random), then XORed 512 stckf values onto the buffer
and hashed it.

Then someone explained to me that sha256 can never produce 256 bits of
entropy, as collisions may exist; one has to assume that the output of
sha256 carries at most 255 bits of entropy. So I decided to double the
buffer and use sha512, to be on the safe side and still be able to stand
behind the statement about 256 bits of entropy within the 32 bytes of
random data produced.

You are right, there is room for improvement:
- Your move of the pre-filling of the buffer to before the loop makes some sense.
  The buffer is then filled with get_random_bytes() just once initially, and for
  every 32-byte hash the left-over from the last stckf XOR loop is used as the
  initial buffer value.
- It's questionable whether the pre-filling with get_random_bytes() makes any
  difference at all. However, should the stckf function ever decide to return
  only zeros, the pre-filling of the buffer is a good safety net.
- In fact, for gathering entropy, just the rightmost (low) 32 bits of each stckf
  value could be used instead of the full 64 bits, so the buffer could shrink to
  a single page. (Or even less, if only the least significant bits are used, or
  if the stckf values XORed into the buffer overlap by a few bytes.) See the
  sketch below.
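A rough, untested sketch of that single-page variant (hypothetical code: it
only reuses the get_tod_clock_fast() and cpacf_kimd() calls from the existing
function, and the helper name is made up):

	/* Hypothetical sketch: keep only the low (fast-moving) 32 bits
	 * of each stckf value, so the same 1024 samples fit into one
	 * page instead of two. */
	static int gather_stckf_page(u8 hash[64])
	{
		u32 *pg;
		int n;

		/* allocate 1 page instead of 2 */
		pg = (u32 *) __get_free_pages(GFP_KERNEL, 0);
		if (!pg)
			return -ENOMEM;

		/* PAGE_SIZE / sizeof(u32) == 1024 samples per page */
		for (n = 0; n < PAGE_SIZE / sizeof(u32); n++)
			pg[n] = (u32) get_tod_clock_fast();

		/* hash over the filled page, as the existing code does */
		cpacf_kimd(CPACF_KIMD_SHA_512, hash, (u8 *) pg, PAGE_SIZE);

		free_pages((unsigned long) pg, 0);
		return 0;
	}

Whether a truncated stckf sample still carries the assumed 0.5 bits of
entropy is exactly the kind of thing that would need measuring first, of course.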

Based on your patch, I'll do some investigation here...

Thanks and regards
Harald Freudenberger