Re: Linux 5.3-rc8

18.09.2019 18:59, Alexander E. Patrakov wrote:
18.09.2019 18:38, Lennart Poettering wrote:
On Tue, 17.09.19 19:29, Willy Tarreau (w@xxxxxx) wrote:

What do you expect these systems to do though?

I mean, think about general purpose distros: they put together live
images that are supposed to work on a myriad of similar (as in: same
arch) but otherwise very different systems (i.e. VMs that might lack
any form of RNG source the same as beefy servers with multiple sources
the same as older netbooks with few and crappy sources, ...). They can't
know what the specific hw will provide or won't. It's not their
incompetence that they build the image like that. It's a common, very
common usecase to install a system via SSH, and it's also very common
to have very generic images for a large number of varied systems to run
on.

I'm totally fine with installing the system via SSH, using a temporary
SSH key. I do make a strong distinction between the installation phase
and the final deployment. The SSH key used *for installation* doesn't
need to be the same as the final one. And very often, by the end of the
installation, we'll have gathered enough entropy to produce a correct
key.
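
For illustration (a sketch, not part of the original mail): on an
OpenSSH-based system the installer's host keys can simply be discarded
and regenerated on the installed system once enough entropy has
accumulated, e.g.:

$ sudo rm -f /etc/ssh/ssh_host_*key*   # drop the keys used during installation
$ sudo ssh-keygen -A                   # regenerate all missing host key types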

That's not how systems are built today though. And I am not sure they
should be. I mean, the majority of systems at this point probably have
some form of hardware (or virtualized) RNG available (even raspi has
one these days!), so generating these keys once at boot is totally
OK. Probably a number of others need just a few seconds to get the
entropy needed, where things are totally OK too. The only problem is
systems that lack any reasonable source of entropy and where
initialization of the pool will take overly long.

I figure we can reduce the number of systems where entropy is scarce
quite a bit if we'd start crediting entropy by default from various hw
rngs we currently don't credit entropy for. For example, the TPM and
older intel/amd chipsets. You currently have to specify
rng_core.default_quality=1000 on the kernel cmdline to make them
credit entropy. I am pretty sure this should be the default now, in a
world where CONFIG_RANDOM_TRUST_CPU=y is set anyway. i.e. why say
RDRAND is fine but those chipsets are not? That makes no sense to me.
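
For anyone who wants to try this today, a minimal sketch of setting that
knob at boot (the GRUB paths and the regeneration command vary per
distro, and "..." stands for whatever options are already there):

$ grep GRUB_CMDLINE_LINUX= /etc/default/grub
GRUB_CMDLINE_LINUX="... rng_core.default_quality=1000"
$ sudo grub-mkconfig -o /boot/grub/grub.cfg

After a reboot, the effective value and the hwrng actually in use can be
checked, assuming rng_core is loaded and exports its parameters as usual:

$ cat /sys/module/rng_core/parameters/default_quality
1000
$ cat /sys/class/misc/hw_random/rng_current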

I am very sure that crediting entropy to chipset hwrngs is a much
better way to solve the issue on those systems than to just hand out
rubbish randomness.

Very well said. However, 1000 is more than the hard-coded quality of some existing rngs, and so would send a misleading message that they are somehow worse. I would suggest case-by-case reevaluation of all existing hwrng drivers by their maintainers, and then setting the default to something like 899, so that evaluated drivers have priority.


Well, I have to provide another data point. On Arch Linux and MSI Z87I desktop board:

$ lsmod | grep rng
<nothing>
$ modinfo rng_core
<yes, the module does exist>

So this particular board has no sources of randomness except interrupts (which are scarce), RDRAND (which is not trusted in Arch Linux by default) and jitter entropy (which is not collected by the kernel and needs haveged or equivalent).
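
A quick way to confirm the same picture on other machines (sketch; note
that /sys/class/misc/hw_random/ only exists once rng_core and a hwrng
driver are actually loaded):

$ cat /sys/class/misc/hw_random/rng_available
$ cat /proc/sys/kernel/random/entropy_avail

The first lists any hardware RNGs the kernel knows about; the second
shows how many bits of entropy the input pool is currently credited with.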

--
Alexander E. Patrakov

Attachment: smime.p7s
Description: S/MIME cryptographic signature

