Re: [PATCH] arch/s390/crypto/prng: Stop being stupidly wasteful with /dev/urandom

> From freude@xxxxxxxxxxxxx Wed Apr  3 11:02:18 2019
> Subject: Re: [PATCH] arch/s390/crypto/prng: Stop being stupidly wasteful with
>  /dev/urandom
> To: George Spelvin <lkml@xxxxxxx>,
>         Harald Freudenberger <freude@xxxxxxxxxxxxxxxxxx>,
>         linux-s390@xxxxxxxxxxxxxxx
> Cc: Martin Schwidefsky <schwidefsky@xxxxxxxxxx>,
>         Jan Glauber <jan.glauber@xxxxxxxxxx>
> References: <201903211215.x2LCFIr3029539@xxxxxxx>
> From: Harald Freudenberger <freude@xxxxxxxxxxxxx>
> Date: Wed, 3 Apr 2019 12:51:48 +0200
> User-Agent: Mozilla/5.0 (X11; Linux x86_64; rv:60.0) Gecko/20100101
>  Thunderbird/60.6.1
> MIME-Version: 1.0
> In-Reply-To: <201903211215.x2LCFIr3029539@xxxxxxx>
> Content-Type: text/plain; charset=utf-8
> Content-Transfer-Encoding: 8bit
> Content-Language: en-US
>
> On 21.03.19 13:15, George Spelvin wrote:
> > generate_entropy() was asking for 8K of high-quality entropy per 64 bytes
> > of output, which is ridiculous.  Only ask get_random_bytes for the
> > requested amount of entropy, and XOR with the hash of the timer-based
> > seed material.
> >
> > (A possible followon improvement: the msbits of get_tod_clock_fast() are
> > unchanging and therefore useless; you could store and hash only the low
> > 32 bits and save half the hashing effort.)
> >
> > A second latent bugfix: prng_tdes_seed() was not copying the final partial
> > doubleword of get_random_bytes() output to the parameter block.  It's
> > only ever called with 8- and 16-byte requests, so the bug never triggered,
> > but fix it so it won't be (silently) buggy in future.
> >
> > Not tested as I don't have access to an S/390, but the code is
> > straightforward.
> >
> > Signed-off-by: George Spelvin <lkml@xxxxxxx>
> > Cc: linux-s390@xxxxxxxxxxxxxxx
> > Cc: Harald Freudenberger <freude@xxxxxxxxxxxxxxxxxx>
> > Cc: Martin Schwidefsky <schwidefsky@xxxxxxxxxx>
> > Fixes: 57127645d79d2e83e801f141f7d03f64accf28aa
> > Cc: Jan Glauber <jan.glauber@xxxxxxxxxx>
> > Fixes: 1b2782948997cf5a0d1747de13d43ba7dfa7c543
> > ---
> >  arch/s390/crypto/prng.c | 36 ++++++++++++++++--------------------
> >  1 file changed, 16 insertions(+), 20 deletions(-)
> >
> > diff --git a/arch/s390/crypto/prng.c b/arch/s390/crypto/prng.c
> > index a97a1802cfb4..e3f649e66c91 100644
> > --- a/arch/s390/crypto/prng.c
> > +++ b/arch/s390/crypto/prng.c
> > @@ -122,33 +122,30 @@ static const u8 initial_parm_block[32] __initconst = {
> >   */
> >  static int generate_entropy(u8 *ebuf, size_t nbytes)
> >  {
> > -	int n, ret = 0;
> > -	u8 *pg, *h, hash[64];
> > +	int n, i, ret = 0;
> > +	u64 *pg;
> > +	u8 hash[64];
> >
> >  	/* allocate 2 pages */
> > -	pg = (u8 *) __get_free_pages(GFP_KERNEL, 1);
> > +	pg = (u64 *) __get_free_pages(GFP_KERNEL, 1);
> >  	if (!pg) {
> >  		prng_errorflag = PRNG_GEN_ENTROPY_FAILED;
> >  		return -ENOMEM;
> >  	}
> >
> > +	get_random_bytes(ebuf, nbytes);
> >  	while (nbytes) {
> > -		/* fill pages with urandom bytes */
> > -		get_random_bytes(pg, 2*PAGE_SIZE);
> > -		/* exor pages with 1024 stckf values */
> > -		for (n = 0; n < 2 * PAGE_SIZE / sizeof(u64); n++) {
> > -			u64 *p = ((u64 *)pg) + n;
> > -			*p ^= get_tod_clock_fast();
> > -		}
> > -		n = (nbytes < sizeof(hash)) ? nbytes : sizeof(hash);
> > -		if (n < sizeof(hash))
> > -			h = hash;
> > -		else
> > -			h = ebuf;
> > +		/* fill pages with 1024 stckf values */
> > +		for (n = 0; n < 2 * PAGE_SIZE / sizeof(u64); n++)
> > +			pg[n] = get_tod_clock_fast();
> > +
> >  		/* hash over the filled pages */
> > -		cpacf_kimd(CPACF_KIMD_SHA_512, h, pg, 2*PAGE_SIZE);
> > -		if (n < sizeof(hash))
> > -			memcpy(ebuf, hash, n);
> > +		cpacf_kimd(CPACF_KIMD_SHA_512, hash, (u8 *)pg, 2*PAGE_SIZE);
> > +
> > +		/* xor with the urandom bytes */
> > +		n = (nbytes < sizeof(hash)) ? nbytes : sizeof(hash);
> > +		for (i = 0; i < n; i++)
> > +			ebuf[i] ^= hash[i];
> >  		ret += n;
> >  		ebuf += n;
> >  		nbytes -= n;
> > @@ -185,13 +182,12 @@ static void prng_tdes_seed(int nbytes)
> >  	get_random_bytes(buf, nbytes);
> >
> >  	/* Add the entropy */
> > -	while (nbytes >= 8) {
> > +	while (nbytes > 0) {
> >  		*((__u64 *)prng_data->prngws.parm_block) ^= *((__u64 *)(buf+i));
> >  		prng_tdes_add_entropy();
> >  		i += 8;
> >  		nbytes -= 8;
> >  	}
> > -	prng_tdes_add_entropy();
> >  	prng_data->prngws.reseed_counter = 0;
> >  }
> >
> Hi George
>
> Well, yes, it does sound a little bit paranoid to pull 8k of random data and exor it
> with 1024 get_tod_clock_fast() values (which is stckf) per 32 bytes of generated
> random buffer.
>
> The calculation behind this is:
> - every delivered byte should have 8 bits of entropy (100%).
> - each stckf invocation 'carries' 0.5 bit of entropy.
> - so to produce 32 bytes with 100% entropy, 512 stckf calls are required.
> And this was the original implementation: it used 1 page, pre-filled it with
> get_random_bytes() (which is not so expensive, as it is the kernel interface
> for /dev/urandom, NOT /dev/random), then exored 512 stckf values onto the buffer
> and hashed it.
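
(Spelling that budget out: 32 bytes * 8 bits/byte = 256 bits of output entropy, and at
0.5 bit per stckf reading that is 256 / 0.5 = 512 readings.  The doubled variant keeps
the same ratio per SHA-512 block: with the usual 4 KiB pages, 2 * 4096 / 8 = 1024
readings per 64 bytes of hash output, and 1024 * 0.5 = 512 bits = 64 bytes.)
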
>
> Then someone explained to me that sha256 can never produce 256 bits of
> entropy, as collisions may exist.  One must assume that the output of
> sha256 carries at most 255 bits of entropy.  So I decided to double all the
> buffers and use sha512 to be on the safe side and to be able to uphold the
> statement about 256 bits of entropy within the 32 bytes of random produced.
>
> You are right, there is room for improvement:
> - Your move of the pre-filling of the buffer to before the loop makes some sense.
>   The buffer is then filled with get_random_bytes() only once, and for every
>   32-byte hash the left-over from the previous stckf() exor loop is used as the
>   initial buffer value.
> - It's questionable whether the pre-filling with get_random_bytes() makes any sense
>   at all.  However, if the stckf function ever decided to return just 0s, the
>   pre-filling of the buffer is a good idea.
> - In fact, for gathering entropy just the low 32 bits of each stckf value could be
>   used instead of the full 64 bits, so the buffer could be just one page.
>   (Or even less when only the lsbs are used, or when the stckf values exored into
>   the buffer are allowed to overlap by some bytes.)  See the sketch below.
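
Just to illustrate that last point (an untested sketch only, the function name is made
up): the same loop structure as generate_entropy() in the patch above, but collecting
only the low 32 bits of each stckf value, so a single page again holds 1024 readings
per SHA-512 block.

/* Sketch only, untested: one page of stckf low halves instead of two
 * pages of full 64-bit values; everything else as in the patched
 * generate_entropy() above. */
static int generate_entropy_lo32(u8 *ebuf, size_t nbytes)
{
	int n, i, ret = 0;
	u32 *pg;
	u8 hash[64];

	/* one page holds 1024 32-bit readings */
	pg = (u32 *) __get_free_page(GFP_KERNEL);
	if (!pg) {
		prng_errorflag = PRNG_GEN_ENTROPY_FAILED;
		return -ENOMEM;
	}

	get_random_bytes(ebuf, nbytes);
	while (nbytes) {
		/* keep only the low half of each stckf value */
		for (n = 0; n < PAGE_SIZE / sizeof(u32); n++)
			pg[n] = (u32) get_tod_clock_fast();

		/* hash over the filled page */
		cpacf_kimd(CPACF_KIMD_SHA_512, hash, (u8 *) pg, PAGE_SIZE);

		/* xor the hash into the urandom bytes, as in the patch */
		n = (nbytes < sizeof(hash)) ? nbytes : sizeof(hash);
		for (i = 0; i < n; i++)
			ebuf[i] ^= hash[i];
		ret += n;
		ebuf += n;
		nbytes -= n;
	}
	free_page((unsigned long) pg);
	return ret;
}

Whether half a bit of entropy per reading still holds for the truncated values is
exactly the kind of thing the investigation you mention would have to confirm.
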
>
> Based on your patch, I'll do some investigation here...
>
> Thanks and regards
> Harald Freudenberger
>
>


