On Tue, Nov 30, 2021 at 04:45:44PM +0100, Greg Kroah-Hartman wrote:
> On Tue, Nov 30, 2021 at 09:31:09AM -0500, Simo Sorce wrote:
> > On Tue, 2021-11-30 at 15:04 +0100, Greg Kroah-Hartman wrote:
> > > Odds are, you REALLY do not want the in-kernel calls to be pulling from
> > > the "random-government-crippled-specification" implementation, right?
> > 
> > You really *do* want that.
> > When our customers are mandated to use FIPS certified cryptography,
> > they want to use it for kernel cryptography as well, and in general
> > they want to use a certified randomness source as well.
> 
> There are huge numbers of internal kernel calls that use random data for
> non-crypto things.

I think the confusion comes from the use of cryptography to hide the
internal state and provide non-predictable sequences, not from the use
of this source to perform cryptography elsewhere. Crypto here, when
used, is not a goal but a means; we could call it a "reduction" or
"whitening" function. Its importance depends solely on how much we want
to protect the internal state from being guessed, which in turn comes
down to how long knowledge of that internal state remains useful. If we
mixed completely independent and unpredictable sources like cosmic
microwave background noise and sea-level beta radiation, which are
constantly renewed, knowing them would bring nothing and there would be
no need for crypto to protect them. That's not necessarily what we're
using, though: we have to deal with more durable sources whose
disclosure could have an impact for some time frame, and which
therefore need some protection.

As such there is probably a broad spectrum between "we must use strong
cryptography on this source, hence abide by the authorities' decisions"
and "we just need this short-lived state not to be trivially guessable
until the next call". In that case, do we *really* care about which
crypto functions are used to hide the internal state? I guess not
really, and it could possibly be configurable at run time. After all,
in practice the jitter entropy and other sources might add enough
uncertainty to complicate analysis of even a weak algorithm and render
the internal state hardly guessable.

> > Your plan requires an active maintainer that guides these changes and
> > interacts with the people proposing them to negotiate the best outcome.
> > But that is not happening, so that road seems blocked at the moment.
> 
> We need working patches that fit with the kernel development model first
> before people can start blaming maintainers :)
> 
> I see almost 300 changes accepted for this tiny random.c file over the
> years we have had git (17 years). I think that's a very large number of
> changes for a 2300 line file that is relied upon by everyone.

I'm also having some concerns about this. It seems to me that it's
always difficult to *simplify* what we have, and that each time we try
to replace something in that area we end up with multiple versions.
Look at the recent prandom32 stuff for example: we got a new algorithm
for IP IDs, in a rush, hoping to generalize it to replace the existing
Tausworthe one. I had a look a few months ago to try to finish the
job... hundreds of callers make use of the internal state for unit
tests :-( Basically unfeasible without breaking lots of drivers I have
no idea how to test. So by trying to replace something we just ended up
with two implementations (and if I remember correctly there were
already a few more, mostly variants of the former).
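
To make the "callers pinned to the internal state" problem concrete,
here is a minimal userspace sketch (not the kernel code) of a
taus113-style Tausworthe generator in the spirit of lib/random32.c; the
seeding helper and the test at the bottom are simplified, hypothetical
stand-ins. Any self-test that hard-codes the expected sequence for a
fixed seed is welded to this exact algorithm, so swapping in a new
generator breaks it:

#include <assert.h>
#include <stdint.h>
#include <stdio.h>

struct rnd_state { uint32_t s1, s2, s3, s4; };

/* One step of a four-component LFSR113/taus113-style generator:
 * each component is an independent Tausworthe LFSR, XORed together. */
static uint32_t taus_u32(struct rnd_state *st)
{
#define TAUS(s, a, b, c, d) (((s & c) << d) ^ (((s << a) ^ s) >> b))
	st->s1 = TAUS(st->s1,  6, 13, 4294967294U, 18);
	st->s2 = TAUS(st->s2,  2, 27, 4294967288U,  2);
	st->s3 = TAUS(st->s3, 13, 21, 4294967280U,  7);
	st->s4 = TAUS(st->s4,  3, 12, 4294967168U, 13);
#undef TAUS
	return st->s1 ^ st->s2 ^ st->s3 ^ st->s4;
}

/* Hypothetical seeding helper: each component must stay above its
 * minimum (s1 > 1, s2 > 7, s3 > 15, s4 > 127) or it degenerates. */
static void taus_seed(struct rnd_state *st, uint32_t seed)
{
	st->s1 = seed | 0x02U;
	st->s2 = seed | 0x08U;
	st->s3 = seed | 0x10U;
	st->s4 = seed | 0x80U;
}

int main(void)
{
	struct rnd_state a, b;

	taus_seed(&a, 12345);
	taus_seed(&b, 12345);

	/* What unit tests rely on: same seed => same sequence. Real
	 * self-tests go further and hard-code expected outputs, which
	 * is exactly what ties hundreds of callers to the algorithm. */
	for (int i = 0; i < 1000; i++)
		assert(taus_u32(&a) == taus_u32(&b));

	printf("deterministic sequence reproduced\n");
	return 0;
}

Replacing the algorithm means every such hard-coded expectation has to
be re-derived and re-validated per caller, which is the part that is
hard to do across drivers one cannot test.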
A replacement ought to be an observation, the conclusion of a job well
done, not a goal in itself. If the changes manage to move everyone in
the right direction and in the end everything is seamlessly replaced
for good, that's awesome. But changing just for the sake of changing is
hard. And if we end up with build-time options to choose between one
solution and the other, we fragment testability :-/

Just my two cents,
Willy