Re: Low random entropy

On 05/29/2017 03:13 AM, Rob Kampen wrote:
On 29/05/17 15:46, Robert Moskowitz wrote:


On 05/28/2017 06:57 PM, Rob Kampen wrote:
On 28/05/17 23:56, Leon Fauster wrote:
Am 28.05.2017 um 12:16 schrieb Robert Moskowitz <rgm@xxxxxxxxxxxxxxx>:



On 05/28/2017 04:24 AM, Tony Mountifield wrote:
In article <792718e8-f403-1dea-367d-977b157af82c@xxxxxxxxxxxxxxx>,
Robert Moskowitz <rgm@xxxxxxxxxxxxxxx> wrote:
On 05/26/2017 08:35 PM, Leon Fauster wrote:
drops back to 30! for a few minutes.  Sigh.
http://issihosts.com/haveged/

EPEL: yum install haveged
WOW!!!

installed, enabled, and started.

Entropy jumped from ~130 bits to ~2000 bits

thanks
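For reference, the quick before-and-after check looks roughly like this (haveged comes from EPEL as above; the systemctl lines assume CentOS 7 - on CentOS 6 use chkconfig/service instead):

    cat /proc/sys/kernel/random/entropy_avail    # pool estimate in bits; a starved box sits in the low hundreds
    yum install haveged
    systemctl enable haveged
    systemctl start haveged
    cat /proc/sys/kernel/random/entropy_avail    # should now report a couple of thousand bits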

Note to anyone running a web server or creating certs: you need entropy. Without it your keys are weak and attackable - possibly even already known.
Interesting. I just did a quick check of the various servers I support, and noticed that all the CentOS 5 and 6 systems report entropy in the low hundreds of bits, but all the CentOS 4 systems and the one old FC3 system report over 3000 bits.

Since they were all pretty much stock installs, what difference between
the versions might explain what I observed?
This is partly why so many certs found in the U of Mich study are weak and factorable. So many systems have inadequate entropy for the generation of key pairs to use in TLS certs. Worst are certs created during the firstboot process, where at times there is no entropy but firstboot still creates its certs.

/var/lib/random-seed and $HOME/.rnd are approaches to mitigate this scenario.
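For context, the random-seed approach just carries pool state across reboots. Roughly what the init scripts do (the exact commands vary by release), bearing in mind that writing into /dev/urandom mixes the data in but does not credit any bits toward entropy_avail:

    # at shutdown: save some pool output
    dd if=/dev/urandom of=/var/lib/random-seed count=1 bs=512
    # at boot: feed it back into the pool
    cat /var/lib/random-seed > /dev/urandom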

--
LF
So there are mitigations - the question really is: why hasn't Red Hat made these mitigations the default for their enterprise products? Maybe there are other influences we are unaware of, but it seems like a huge hole. With the advent of SSL/TLS being mandated by Google et al., every device needs access to entropy.

The challenge is that this is so system dependent. Some systems are just fine with a stock install. Others need rng-tools. Still others need haveged. If Red Hat were to do anything, it would be to stop making the default cert during firstboot and instead spin off a one-time process that would wait until there was enough entropy and then create the default cert. Thing is, I can come up with situations where that can go wrong.
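Roughly the sort of thing I have in mind - purely a sketch, with an arbitrary 1000-bit threshold and generic openssl options (the paths are just the usual mod_ssl defaults):

    # wait until the kernel pool looks healthy
    while [ "$(cat /proc/sys/kernel/random/entropy_avail)" -lt 1000 ]; do
        sleep 5
    done
    # then generate the default self-signed cert
    openssl req -x509 -newkey rsa:2048 -nodes -days 365 \
        -subj "/CN=$(hostname)" \
        -keyout /etc/pki/tls/private/localhost.key \
        -out /etc/pki/tls/certs/localhost.crt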

There are a lot of best practices with certificates and crypto that are not apparent to most admins. I know some things from the crypto work I do (I am the author of the HIP protocol in the IETF). There is just no one-size-fits-all here, and people need to collect clues along with random entropy....

OK, that makes sense. I've been an admin on Linux servers for about 18 years, understand the basics, and use certificates for web and email servers. This thread has exposed an area that I'm only peripherally aware of - the need for sufficient entropy when generating the keys that protect traffic across the internet, so that an observer cannot reverse engineer those keys. I still fail to see why every server and workstation is not set up to do this at some minimum level - I guess Linux out of the box does this; the issue is that the minimum from just the basic kernel on most hardware is too little given today's ability to crack ciphers.

Is there some practical guideline out there that puts this in terms that don't require a PhD in mathematics to understand and implement?

For instance, I have set up and run mail servers for nearly two decades, only in the last 10+ years with certificates and mandated SSL/TLS - yet the issue of low random entropy is relevant here, and until this thread I hadn't taken steps to resolve it.

You raise an important point. Alice Wonder earlier said she installs haveged on all her servers as best practice. It is hard to fault that approach...

I am one of the people that make your life difficult. I design secure protocols. I co-chaired the original IPsec work. I created HIP which was used as 'lessons learned' for IKEv2. I contributed to IEEE 802.11i which gave us AES-CCM and 802.1AE which gave us AES-GCM. And I wrote the attack on WiFi WPA-PSK because implementors were not following the guidelines in the spec.

When we are designing these protocols, we talk to the REAL cryptographers and work out: 'oh, we need a 256-bit random nonce here and a 32-bit random IV there.' We end up needing lots of randomness in our protocols. Then we toss the spec over the wall to get someone to code it.
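To put those sizes in concrete terms, pulling the equivalent values out of the system RNG looks like this (just an illustration with openssl; a real implementation would use its crypto library's CSPRNG):

    openssl rand -hex 32    # 256 bits for a nonce
    openssl rand -hex 4     # 32 bits for an IV

Every handshake or session can consume several values like these, which is why the demand for randomness adds up so quickly.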

Fortunately for the coders, the cryptographers have recognized that the EEs cannot really create true randomness (when we did 802.11i, appendix H described how to build a ring oscillator as a random source. Don't get me going about what Bluetooth did wrong.). So the history of pseudo-random number generators (PRNGs) is long and storied. But a PRNG still needs a good random seed. Don't get me started on failures here and lessons learned. I worked for ICSAlabs for 14 years and we saw many a broken implementation.

So here we are with 'modern' Linux (which Fedora version is CentOS built from?). We know that no board design can feed real random bits as fast as our protocols may need them. Or at least we probably cannot afford such a board. So first you need a good random harvester. Then a good PRNG. How does RH implement the protocols? I have no idea. I just contribute to the problem, not the solution.
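As an aside, you can at least ask the kernel what hardware source, if any, its hw_random framework has found (these sysfs files only exist when that framework is loaded):

    cat /sys/class/misc/hw_random/rng_available
    cat /sys/class/misc/hw_random/rng_current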

All that said, it looks like there are basic tools like rng-tools to install to work with board-level RNG hardware. Then there is haveged, which harvests entropy from timing jitter on the processor rather than from a dedicated hardware source.
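On a box that does have a usable hardware source, the rng-tools side is just (service commands as on CentOS 7; chkconfig/service on 6):

    yum install rng-tools
    systemctl enable rngd
    systemctl start rngd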

All this said, I should probably write something up and offer it to CentOS-docs. I need to talk to a few people...

Bob

_______________________________________________
CentOS mailing list
CentOS@xxxxxxxxxx
https://lists.centos.org/mailman/listinfo/centos


