Re: [PATCH v3 0/8] Rework random blocking

> On Dec 26, 2019, at 8:04 PM, Stephan Mueller <smueller@xxxxxxxxxx> wrote:
> 
> On Thursday, 26 December 2019 at 12:12:29 CET, Andy Lutomirski wrote:
> 
> Hi Andy,
> 
>>>> On Dec 26, 2019, at 5:29 PM, Stephan Müller <smueller@xxxxxxxxxx> wrote:
>>> 
>>> On Monday, 23 December 2019 at 09:20:43 CET, Andy Lutomirski wrote:
>>> 
>>> Hi Andy,
>>> 
>>>> There are some open questions and future work here:
>>>> 
>>>> Should the kernel provide an interface to get software-generated
>>>> "true random" numbers?  I can think of only one legitimate reason to
>>>> use such an interface: compliance with government standards.  If the
>>>> kernel provides such an interface going forward, I think it should
>>>> be a brand new character device, and it should have a default mode
>>>> 0440 or similar.  Software-generated "true random numbers" are a
>>>> very limited resource, and resource exhaustion is a big deal.  Ask
>>>> anyone who has twiddled their thumbs while waiting for gnupg to
>>>> generate a key.  If we think the kernel might do such a thing, then
>>>> patches 5-8 could be tabled for now.
>>> 
>>> What about offering a compile-time option to enable or disable such code?
>>> Note, with the existing random.c code base, there is no need to have a
>>> separate blocking_pool. The ChaCha20 DRNG could be used for that very same
>>> purpose, provided that such true random numbers are only generated when
>>> the ChaCha20 DRNG has received an equal amount of "unused" entropy.
>> 
>> This scares me. The DRNG should be simple and easy to understand. If we’re
>> tapping extra numbers in some weird way, then I would be more comfortable
>> with some clear assurance that this doesn’t break the security. If we’re
>> tapping numbers in the same way as normal urandom, then I don’t really see
>> the point.
> 
> Agreed. I was just trying to outline that the removal of the blocking_pool is 
> a good thing. Even if we decide that random.c should receive a TRNG, we do 
> not need to re-add a blocking pool, but can easily use the existing ChaCha20 
> DRNG (most likely with its own instance).

Fair enough.
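
Just so we are on the same page about the accounting, here is the shape of
what I understand you to be proposing, as a rough userspace-style sketch
rather than kernel code (SHAKE256 stands in for the ChaCha20 DRNG purely to
keep it self-contained, and all of the names below are made up):

# Sketch only: an entropy-accounted "TRNG" read from a dedicated DRNG
# instance.  SHAKE256 stands in for the kernel's ChaCha20 DRNG purely to
# keep this self-contained; structure and names are invented.
import hashlib
import os

class AccountedDRNG:
    def __init__(self):
        self._state = b"\x00" * 32
        self._entropy_bits = 0   # "unused" entropy credited to this instance

    def add_entropy(self, data: bytes, credited_bits: int):
        # Mix fresh entropy into the instance and credit it.
        self._state = hashlib.sha256(self._state + data).digest()
        self._entropy_bits += credited_bits

    def read_trng(self, nbytes: int) -> bytes:
        # Hand out no more bits than have been credited; a real
        # implementation would block and wait for a reseed instead.
        needed = nbytes * 8
        if self._entropy_bits < needed:
            raise BlockingIOError("not enough credited entropy")
        self._entropy_bits -= needed
        out = hashlib.shake_256(self._state + b"out").digest(nbytes)
        # Ratchet the state so the returned bytes cannot be reconstructed.
        self._state = hashlib.sha256(self._state + b"ratchet").digest()
        return out

drng = AccountedDRNG()
drng.add_entropy(os.urandom(32), credited_bits=256)  # pretend: 256 fresh bits
print(drng.read_trng(16).hex())                      # debits 128 bits of credit

The only interesting part is the credit/debit bookkeeping; the generator
underneath can stay exactly what it is today.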

> 
>>>> Alternatively, perhaps the kernel should instead provide a
>>>> privileged interface to read out raw samples from the various
>>>> entropy sources, and users who care could have a user daemon that
>>>> does something intelligent with them.  This would push the mess of
>>>> trying to comply with whatever standards are involved to userspace.
>>>> Userspace could then export "true randomness" via CUSE if it is so
>>>> inclined, or could have a socket with a well-known name, or whatever
>>>> else seems appropriate.
>>> 
>>> With patch set v26 of my LRNG I offer another possible approach that
>>> avoids any additional character device file and prevents the starvation
>>> of legitimate use cases: the LRNG's entropy pool reserves a different
>>> minimum level of entropy depending on the use case requesting the data.
>>> 
>>> If an unprivileged caller requests true random data, at least 1024 bits of
>>> entropy are left in the pool, i.e. only the entropy above that point is
>>> available for this request type. Note, even namespaces fall into this
>>> category, considering that unprivileged users can create a user namespace
>>> in which they can become root.
>> 
>> This doesn’t solve the problem. If two different users run stupid programs
>> like gnupg, they will starve each other.
> 
> But such a scenario will always occur, will it not? If there are two callers
> for a limited resource, they will contend if one "over-uses" the resource. My
> idea was to provide an interface whose use does not starve other, more
> relevant use cases (e.g. seeding of the DRNGs). I.e. a user of a TRNG has to
> accept being DoSed - that is the price to pay for this concept.

Maybe I’m just cynical, but I expect that, if the feature is available to everyone, then lots of user programmers will use it even though they don’t need to.  If, on the other hand, there is a barrier to entry, then people will be more likely to stop and think.

Even gnupg could have been more clever — when generating a 4096-bit RSA key, there is no actual need for 4096 bits of entropy, however entropy is defined. 256 bits would have been more than adequate.
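
For example (a rough sketch, not something gnupg actually does; the helper
below is invented): read 256 bits from the kernel once, expand them
deterministically, and drive the whole prime search from that, so key
generation never has to wait on the blocking interface again.

# Sketch: one 256-bit seed is enough to drive an entire RSA key generation.
# SHAKE256 is used as the deterministic expander here only to keep the
# example self-contained; a real program would use its own CSPRNG.
import hashlib
import os

class SeededStream:
    """Expand a single 256-bit seed into as many bytes as needed."""

    def __init__(self, seed: bytes):
        self._seed = seed
        self._counter = 0

    def read(self, nbytes: int) -> bytes:
        block = self._counter.to_bytes(8, "big")
        self._counter += 1
        return hashlib.shake_256(self._seed + block).digest(nbytes)

stream = SeededStream(os.urandom(32))   # the only kernel entropy consumed

# A 4096-bit RSA key needs two ~2048-bit prime candidates; the search just
# keeps calling read(), with no further /dev/random traffic, however long
# it takes.
candidate = int.from_bytes(stream.read(256), "big") | (1 << 2047) | 1
print(candidate.bit_length())   # 2048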

(FWIW, my personal view is that 512 bits, in the sense of “the distribution being sampled produces no output with probability greater than about 2^-512”, is a good upper limit for even the most paranoid.  This is because it’s reasonable to assume that an attacker can’t do more than 2^128 operations. As djb has noted, multi-target attacks mean that you can amplify success probability in some cases by a factor that won’t exceed 2^128.  Some day, quantum computers might square-root everything, giving 512 bits. Actually, quantum computers won’t square root everything, but much more complicated analysis is needed to get a believable bound.)
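
(Spelling the arithmetic out: 128 bits for the attacker's operation budget plus
128 bits of multi-target amplification is 256 bits against a classical
attacker, and doubling that to absorb a hypothetical square-root quantum
speedup gives the 512.)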

—Andy



