Jakob Bohm via openssl-users wrote in
 <23f8b94d-0078-af3c-b46a-929b9d0054ea@xxxxxxxxxx>:

 |On 28/05/2019 23:48, Steffen Nurpmeso wrote:
 |> Jay Foster wrote in
 |> <84571f12-68b3-f7ee-7896-c891a2e253e7@xxxxxxxxxxxxxx>:
 |>|On 5/28/2019 10:39 AM, Jay Foster wrote:
 |>|> I built OpenSSL 1.1.1c from the recent release, but have noticed what
 |>|> seems like a significant performance drop compared with 1.1.1b.  I
 |>|> notice this when starting lighttpd.  With 1.1.1b, lighttpd starts in a
 |>|> few seconds, but with 1.1.1c, it takes several minutes.
  ...
 |>|I think I have tracked down the change in 1.1.1c that is causing this.
 |>|It is the addition of the DEVRANDOM_WAIT functionality for linux in
 |>|e_os.h and crypto/rand/rand_unix.c.  lighttpd (libcrypto) is waiting in
 |>|a select() call on /dev/random.  After this eventually wakes up, it then
 |>|reads from /dev/urandom.  OpenSSL 1.1.1b did not do this, but instead
 |>|just read from /dev/urandom.  Is there more information about this
 |>|change (i.e., a rationale)?  I did not see anything in the CHANGES file
 |>|about it.
  ...
 |> I do not know why lighttpd ends up on /dev/random for you, but in
 |> my opinion the Linux random stuff is both sophisticated and sucks.

P.S.: I have now looked at the OpenSSL code and understand what you
have said.  It indeed selects on /dev/random.

 |> The latter because (it seems that many) people end up using
 |> haveged or similar to pimp up their entropy artificially, whereas
 |> on the other side the initial OS seeding is no longer truly
 |> supported.  Writing some seed to /dev/urandom does not bring any
 |> entropy to the "real" pool.
 |
 |Something equivalent to your program (but not storing a bitcount field)
 |used to be standard in Linux boot scripts before systemd.  But it
 |typically used the old method of just writing the saved random bits
 |into /dev/{u,}random .

Oh, still: AlpineLinux, for example, did (and I think still does, using
a script that originates from Gentoo aka OpenRC) save a kilobyte read
from /dev/urandom, to restore it upon the next boot.  But that does not
feed the pool which feeds /dev/random; it does not count against
/proc/sys/kernel/random/entropy_avail.

Even that I can understand a little (physical access would reveal the
data stored in the entropy file), even though the entropy is not used
directly but passed through state machines, which could be randomized
further when it is fed back in, for example by the interrupts of the
actual devices present in the host hardware environment while doing so.
But you lose all the entropy that the machine collected during its last
uptime, so you depend solely on some CPU features and the noise that
system startup produces to create startup entropy.

After running into the problem and looking around I realized that many
people seem to run the haveged daemon (there is also a kernel module
which does something similar, but using it did not help me), which
applies some maths, and it is mystifying, as it can produce thousands
of random bits in less than a second!

Even on my brand-new laptop, which (skipping a decade of hardware
development for me) has an 8th-generation i5, I see hangs of several
seconds (iirc) without the little helper I attached in the last
message.  With it I have a (SysV init/BSD rc script, aka CRUX Linux)
boot time of two seconds, which is so gratifying that I have to write
it down.
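For reference, here is a minimal, untested sketch of the crediting
approach such a helper can take, using the RNDADDENTROPY ioctl(2)
documented in random(4).  This is only an illustration of the
technique, not the actual helper attached to the earlier message; the
seed file path and size are made up.  A plain write(2) to /dev/urandom
merely mixes the bytes into the pool, whereas this ioctl also credits
them, so entropy_avail goes up (it needs CAP_SYS_ADMIN):

  /* seed-credit.c: feed a saved seed back into the kernel pool *and*
   * credit it, via the RNDADDENTROPY ioctl from random(4).
   * Hypothetical example; seed path and SEED_BYTES are assumptions. */
  #include <stdio.h>
  #include <fcntl.h>
  #include <unistd.h>
  #include <sys/ioctl.h>
  #include <linux/random.h>     /* RNDADDENTROPY, struct rand_pool_info */

  #define SEED_BYTES 512

  int main(void)
  {
      /* rand_pool_info ends in a flexible buffer, so reserve room for it */
      union {
          struct rand_pool_info info;
          unsigned char space[sizeof(struct rand_pool_info) + SEED_BYTES];
      } u;
      ssize_t n;
      int fd;

      fd = open("/var/lib/urandom.seed", O_RDONLY);   /* made-up path */
      if (fd < 0) {
          perror("open seed");
          return 1;
      }
      n = read(fd, u.info.buf, SEED_BYTES);
      close(fd);
      if (n <= 0) {
          fprintf(stderr, "no seed data\n");
          return 1;
      }

      u.info.buf_size = (int)n;
      u.info.entropy_count = (int)n * 8;  /* claim the bytes as full entropy */

      fd = open("/dev/urandom", O_WRONLY);
      if (fd < 0 || ioctl(fd, RNDADDENTROPY, &u.info) < 0) {
          perror("RNDADDENTROPY");
          return 1;
      }
      close(fd);
      return 0;
  }

Crediting is what raises entropy_avail, and that, as far as I
understand it, is exactly what the DEVRANDOM_WAIT select() on
/dev/random is waiting for.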
 |This makes me very surprised that they removed such a widely used
 |interface, can you point out when that was removed from the Linux
 |kernel?

Hm, ok, what they have actually removed was the RNDGETPOOL ioctl(2)
(according to random(4)), so my claim regarding deprecation was
misleading and wrong.  Nonetheless it has to be said that today an
administrator does not have the possibility to simply feed in good
entropy via a shell script, unless I am mistaken; that is, I have no
idea whether systemd provides something to overcome this.

--steffen
|
|Der Kragenbaer,                The moon bear,
|der holt sich munter           he cheerfully and one by one
|einen nach dem anderen runter  wa.ks himself off
|(By Robert Gernhardt)
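P.P.S.: a quick way to observe the non-crediting behaviour mentioned
above is the following untested sketch, which prints the kernel's
entropy estimate before and after a plain write to /dev/urandom; on
kernels of that era (before 5.6) the number should not increase:

  /* entropy-avail-check.c: show that writing to /dev/urandom is mixed
   * into the pool but not credited to the entropy estimate. */
  #include <stdio.h>
  #include <fcntl.h>
  #include <unistd.h>

  static long entropy_avail(void)
  {
      long n = -1;
      FILE *f = fopen("/proc/sys/kernel/random/entropy_avail", "r");

      if (f != NULL) {
          if (fscanf(f, "%ld", &n) != 1)
              n = -1;
          fclose(f);
      }
      return n;
  }

  int main(void)
  {
      static const char buf[512];   /* content is irrelevant here */
      int fd;

      printf("entropy_avail before: %ld\n", entropy_avail());
      fd = open("/dev/urandom", O_WRONLY);
      if (fd >= 0) {
          (void)write(fd, buf, sizeof buf);   /* mixed in, not credited */
          close(fd);
      }
      printf("entropy_avail after:  %ld\n", entropy_avail());
      return 0;
  }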