Re: Revised draft of random(7) man page for review

On Tue, Nov 15, 2016 at 07:56:09AM +0100, Michael Kerrisk (man-pages) wrote:
>        *  The Linux-specific getrandom(2) system call, available since
>           Linux 3.17.  This system call provides access either to  the
>           same  source  as  /dev/urandom (called the urandom source in
>           this page) or to the same source as /dev/random (called  the
>           random  source  in  this  page).  The default is the urandom
>           source; the random source  is  selected  by  specifying  the
>           GRND_RANDOM flag to the system call.
>....
>    Choice of random device
>        Unless  you are doing long-term key generation (and most likely
>        not even then), you probably shouldn't be using the /dev/random
>        device or getrandom(2) with the GRND_RANDOM flag.
> 

Given the definition earlier, maybe this section should be titled
"Choice of random source"?
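
As an aside, for readers who want to see what the distinction looks
like in code: here's a minimal sketch (my illustration, not text from
the draft) of selecting the two sources from C, invoking the raw
system call via syscall(2) since a glibc wrapper may not be
available:

    #include <stdio.h>
    #include <unistd.h>
    #include <sys/syscall.h>      /* SYS_getrandom */
    #include <linux/random.h>     /* GRND_RANDOM */

    int main(void)
    {
        unsigned char key[32];
        long n;

        /* Default (flags == 0): the urandom source. */
        n = syscall(SYS_getrandom, key, sizeof(key), 0);
        if (n < 0)
            perror("getrandom (urandom source)");

        /* GRND_RANDOM: the random source; may block, and may
           return fewer bytes than requested. */
        n = syscall(SYS_getrandom, key, sizeof(key), GRND_RANDOM);
        if (n < 0)
            perror("getrandom (random source)");

        return 0;
    }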

>    Usage recommendations
>        The kernel random-number generator relies on  entropy  gathered
>        from  device  drivers and other sources of environmental noise.
>        It is designed to produce a small amount of  high-quality  seed
>        material to seed a cryptographically secure pseudorandom number
>        generator (CSPRNG).  It is designed for  security,  not  speed,
>        and  is  poorly  suited  to generating large amounts of crypto‐
>        graphic random data.  Users should be economical in the  amount
>        of seed material that they consume via getrandom(2), /dev/uran‐
>        dom, and /dev/random.
> 
>        ┌─────────────────────────────────────────────────────┐
>        │FIXME                                                │
>        ├─────────────────────────────────────────────────────┤
>        │Is it really  necessary  to  avoid  consuming  large │
>        │amounts from /dev/urandom? Various sources linked to │
>        │by https://bugzilla.kernel.org/show_bug.cgi?id=71211 │
>        │suggest it is not.                                   │
>        │                                                     │
>        │And: has the answer to the previous question changed │
>        │across kernel versions?                              │
>        └─────────────────────────────────────────────────────┘
>        Consuming unnecessarily large  quantities  of  data  via  these
>        interfaces  will  have  a negative impact on other consumers of
>        randomness.

So "poorly suited" is definitely true.  Also true is that urandom is
not engineered for use for non-cryptographic uses.  It's always going
to be faster to use random(3) for those purposes.
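
To make the contrast concrete, here is a sketch of the
non-cryptographic path (the time(NULL) seed is a toy, fine for
simulations and test data but never for keys):

    #include <stdio.h>
    #include <stdlib.h>
    #include <time.h>

    int main(void)
    {
        int i;

        /* Seed once; random(3) is fast and perfectly fine for
           simulations, shuffling, test data -- but not keys. */
        srandom((unsigned) time(NULL));

        for (i = 0; i < 5; i++)
            printf("%ld\n", random());

        return 0;
    }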

As far as whether or not it has a negative impact, it depends on how
much you trust the underlying cryptographic algorithms.  If the CSPRNG
is seeded correctly with at least 256 bits of entropy that can't be
guessed by the attacker, and if the underlying cryptographic
primitives are secure, then it won't matter.  But *if* there is an
unknown vulnerability in the underlying primitive, and *if* large
amounts of data generated by the CSPRNG would help exploit that
vulnerability, and *if* that bulk amount of CSPRNG output is made
available to an attacker with the capability to exploit the
underlying cryptographic weakness, then there would be a problem.

Obviously, no one knows of such a vulnerability, and I'm fairly
confident that there won't be such a vulnerability across the
different ways we've used to generate the urandom source --- but some
people are professional paranoids, and would argue that we shouldn't
make bulk output of the CSPRNG available for no good reason, just in
case.

>        ┌─────────────────────────────────────────────────────┐
>        │FIXME                                                │
>        ├─────────────────────────────────────────────────────┤
>        │Above: we need to define "negative impact".  Is  the │
>        │only  negative  impact  that  we may slow readers of │
>        │/dev/random, since it will  block  until  sufficient │
>        │entropy has once more accumulated?                   │
>        │                                                     │
>        │And: has the answer to the previous question changed │
>        │across kernel versions?                              │
>        └─────────────────────────────────────────────────────┘

This answer has changed across kernel versions.  As of the changes
made in 4.8 and newer kernels, we reseed the urandom pool every five
minutes (if it is in use), so it doesn't matter whether you draw one
byte or one gigabyte from the urandom source; it won't slow down
readers of the random source.

Between 3.13 and 4.8, we capped how often entropy would be pulled
from /dev/random to once every 60 seconds, so it mattered a bit more,
but it still wouldn't significantly slow down readers of /dev/random.

Before 3.13, pulling from /dev/urandom even in moderate amounts would
significantly slow down readers of /dev/random (for example, the
Chrome browser uses /dev/urandom for session keys for all of its TLS
connections --- and a Chrome browser typically opens lots of TLS
connections as you browse the web).
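
(For illustration only --- this is not Chrome's actual code --- that
kind of per-connection consumption amounts to a short read from
/dev/urandom for each session key, roughly like this hypothetical
helper:)

    #include <fcntl.h>
    #include <unistd.h>

    /* Hypothetical: fetch one session key by reading /dev/urandom
       directly, as applications did before getrandom(2) existed.
       Returns 0 on success, -1 on error. */
    static int get_session_key(unsigned char *key, size_t len)
    {
        int fd = open("/dev/urandom", O_RDONLY);
        ssize_t n;

        if (fd < 0)
            return -1;
        n = read(fd, key, len);
        close(fd);
        return (n == (ssize_t) len) ? 0 : -1;
    }

    int main(void)
    {
        unsigned char key[16];
        return get_session_key(key, sizeof(key)) == 0 ? 0 : 1;
    }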

						- Ted