On Tue, 19 Apr 2022, Jason A. Donenfeld wrote:
> For situations in which we don't have a c0 counter register available,
> we've been falling back to reading the c0 "random" register, which is
> usually bounded by the number of TLB entries and changes every other
> cycle or so. This means it wraps extremely often. We can do better by
> combining this fast-changing counter with a potentially slower-changing
> counter from random_get_entropy_fallback() in the more significant
> bits. This commit combines the two, taking into account that the
> changing bits are in a different bit position depending on the CPU
> model. In addition, we were previously falling back to 0 for ancient
> CPUs that Linux does not support anyway; remove that dead path
> entirely.
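 For readers following along, the combination described above comes down
to something like the sketch below. This is illustrative only: the
helper can_use_mips_counter() and the exact 6-bit shift/mask values for
the Random field are assumptions, not taken verbatim from the patch;
read_c0_random(), read_c0_count(), read_c0_prid() and cpu_has_3kex are
the usual MIPS kernel accessors, and random_get_entropy_fallback() is
the fallback named in the commit message.

static inline unsigned long random_get_entropy(void)
{
	unsigned int c0_random;

	/* CPUs with a usable CP0 counter can just read it. */
	if (can_use_mips_counter(read_c0_prid()))
		return read_c0_count();

	/*
	 * The changing bits of the Random register sit at a different
	 * position on R3k-style (3kex) CPUs than on later models.
	 */
	if (cpu_has_3kex)
		c0_random = (read_c0_random() >> 8) & 0x3f;
	else
		c0_random = read_c0_random() & 0x3f;

	/*
	 * Put the slower-changing fallback counter in the more
	 * significant bits, with the fast-changing Random value in the
	 * low bits; invert Random so the result counts up as the
	 * register counts down.
	 */
	return (random_get_entropy_fallback() << 6) | (0x3f - c0_random);
}

 The 6-bit width here simply mirrors a typical upper bound on the number
of TLB entries; a real implementation would follow the CPU's actual
Random field width.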
Tested-by: Maciej W. Rozycki <macro@xxxxxxxxxxx>

 I've pushed the algorithm through testing with a number of suitable
systems:

- an R2000A and an R3000A with no timer of any kind, only jiffies,

- an R3400 with a chipset timer only,

- an R4400SC with a usable buggy CP0 counter and a chipset timer,

- a 5Kc with a good CP0 counter only,

with no obvious issues spotted.

 Thank you for working on this!

  Maciej