On 29/05/17 15:46, Robert Moskowitz wrote:
On 05/28/2017 06:57 PM, Rob Kampen wrote:
On 28/05/17 23:56, Leon Fauster wrote:
Am 28.05.2017 um 12:16 schrieb Robert Moskowitz <rgm@xxxxxxxxxxxxxxx>:
On 05/28/2017 04:24 AM, Tony Mountifield wrote:
In article <792718e8-f403-1dea-367d-977b157af82c@xxxxxxxxxxxxxxx>,
Robert Moskowitz <rgm@xxxxxxxxxxxxxxx> wrote:
On 05/26/2017 08:35 PM, Leon Fauster wrote:
drops back to 30! for a few minutes. Sigh.
http://issihosts.com/haveged/
EPEL: yum install haveged
WOW!!!
installed, enabled, and started.
Entropy jumped from ~130 bits to ~2000 bits
thanks
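(For anyone wanting to replicate that, a rough sketch of the install, assuming EPEL is already enabled; the SysV commands are for EL6, the systemd ones for EL7, and the service/unit name follows the EPEL haveged package - availability on older releases may vary:

    # CentOS 6
    yum install haveged
    chkconfig haveged on
    service haveged start

    # CentOS 7
    yum install haveged
    systemctl enable haveged
    systemctl start haveged
)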
Note to anyone running a web server or creating certs: you need
entropy. Without it your keys are weak and attackable, probably even
already known.
Interesting. I just did a quick check of the various servers I support,
and noticed that all the CentOS 5 and 6 systems report entropy in the
low hundreds of bits, but the CentOS 4 systems and the one old FC3
system all report over 3000 bits.
Since they were all pretty much stock installs, what difference between
the versions might explain what I observed?
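(The quick check referred to above is just the kernel's entropy counter;
something like the following, where the hostnames are of course placeholders:

    cat /proc/sys/kernel/random/entropy_avail

    # or across a batch of hosts:
    for h in host1 host2 host3; do
        printf '%s: ' "$h"
        ssh "$h" cat /proc/sys/kernel/random/entropy_avail
    done
)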
This is partly why so many certs found in the U of Mich study were
weak and factorable. So many systems have inadequate entropy for
generating the key pairs used in TLS certs. Worst are the certs
created during the firstboot process, where at times there is no
entropy at all, yet firstboot still creates its certs.
/var/lib/random-seed and $HOME/.rnd are approaches to mitigate this
scenario.
--
LF
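(For reference, the random-seed mechanism is roughly the following, per the
random(4) man page; the exact block size and paths differ between releases.
Note that writing a saved seed back to /dev/urandom mixes it into the pool
but does not credit the entropy estimate, which is why a very first boot can
still be starved:

    # at shutdown: carry some seed material across the reboot
    dd if=/dev/urandom of=/var/lib/random-seed count=1 bs=512

    # at boot: mix the saved seed back into the pool
    cat /var/lib/random-seed > /dev/urandom
)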
So there are mitigations. The question really is: why hasn't Red Hat
made these mitigations the default for their enterprise products?
Maybe there are other influences we are unaware of, but it seems like
a huge hole. With SSL/TLS being mandated by Google et al., every
device needs access to entropy.
The challenge is that this is so system dependent. Some systems are
just fine with a stock install. Others need rng-tools. Still others
need haveged. If Red Hat were to do anything, it would be to stop
creating the default cert during firstboot, and instead spin off a
one-time process that would wait until there was enough entropy and
then create the default cert. Thing is, I can come up with situations
where even that can go wrong.
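(A minimal sketch of the kind of one-time job I mean; the 2000-bit
threshold, file paths and cert subject below are made up for illustration:

    #!/bin/sh
    # wait until the kernel reports a reasonable amount of entropy,
    # then generate the default self-signed key and cert
    while [ "$(cat /proc/sys/kernel/random/entropy_avail)" -lt 2000 ]; do
        sleep 5
    done
    openssl req -x509 -newkey rsa:2048 -nodes -days 365 \
        -subj "/CN=$(hostname)" \
        -keyout /etc/pki/tls/private/localhost.key \
        -out /etc/pki/tls/certs/localhost.crt

And that is where it can go wrong: on a headless box with no disk
activity and no hardware RNG, that loop might never finish, which is
where rng-tools or haveged come back in.)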
There are a lot of best practices with certificates and crypto that
are not apparent to most admins. I know some of this from the crypto
work I do (I am the author of the HIP protocol in the IETF). There is
just no one-size-fits-all here, and people need to collect clues
along with random entropy....
OK, that makes sense. I've been an admin on Linux servers for about 18
years, understand the basics, and use certificates for web and email
servers. This thread has exposed an area I'm only peripherally aware of:
the need to generate, with sufficient entropy, the keys that protect
traffic going across the internet, so that an observer cannot reverse
engineer them.
I still fail to see why every server and workstation is not set up to do
this at some minimum level. I guess Linux out of the box does this; the
issue is that the minimum available from just the basic kernel on most
hardware is too little given today's ability to crack ciphers.
Is there some practical guideline out there that puts this in terms that
don't require a PhD in mathematics to understand and implement?
For instance, I have set up and run mail servers for nearly two decades,
only in the last 10+ years with certificates and mandated SSL/TLS, yet
the issue of low random entropy is relevant here and until this thread I
hadn't taken steps to resolve it.
_______________________________________________
CentOS mailing list
CentOS@xxxxxxxxxx
https://lists.centos.org/mailman/listinfo/centos