On Sat, Dec 11, 2021 at 04:45:55PM +0100, Thomas Schoebel-Theuer wrote:
> 4) Collection of entropy vs consumption of entropy: the old /dev/random has
> an important feature for me: any _mass_ usage by whatever class of users
> (whether tens of thousands of UIDs per server and/or HTTP requests per
> second, or maybe even some privileged orchestration scripts) would
> _consume_ masses of entropy. When such consumption exceeded the production
> rate, the old /dev/random would become so slow that our internal monitoring
> processes would certainly alert, and would thereby point the responsible
> people (on other teams) at the problem.

I'm sorry, but I cannot agree with you on this. You are claiming that your
monitoring processes are so limited that the only situation they can detect
is when the machine is basically dead.

There are plenty of users who have ended up replacing /dev/random with
/dev/urandom in production to make sure a terrible service outage never
happens again, and one important property of an RNG is its performance,
particularly when it is shared between processes and users.

The fact that your monitoring only triggers once the system becomes unusable
is proof that the monitoring needs to be fixed, certainly not an indication
that whatever kernel limitation you happen to be benefiting from does not
deserve to be addressed.

Willy
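For context, the kind of probe being debated can be sketched as a latency check on the device itself rather than a check that the service is still alive. This is only an illustrative sketch (the function name and the one-second threshold are made up for the example, not taken from the thread): on pre-5.6 kernels a blocking read from /dev/random could stall once the entropy estimate was depleted, so timing a small read gives a monitor an early warning signal.

```python
import time

def random_read_latency(nbytes=16, path="/dev/random"):
    """Time a small blocking read from the given device.

    On old kernels /dev/random could block for a long time when the
    entropy estimate was depleted; on current kernels it behaves like
    /dev/urandom and returns almost immediately.
    """
    start = time.monotonic()
    with open(path, "rb") as f:
        data = f.read(nbytes)
    return len(data), time.monotonic() - start

if __name__ == "__main__":
    nread, latency = random_read_latency()
    # Illustrative threshold only; a real deployment would tune this
    # and alert well before reads take whole seconds.
    status = "OK" if latency < 1.0 else "SLOW"
    print(f"read {nread} bytes in {latency:.6f}s: {status}")
```

A monitor built on a check like this could alert on rising read latency while the rest of the system is still healthy, instead of only firing once everything that touches the RNG has stalled.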