Hi Maciej,

On Thu, Apr 14, 2022 at 02:16:18AM +0100, Maciej W. Rozycki wrote:
> Yes, for the relevant CPUs the range is 63-8 << 8 for R3k machines and
> 47-0 (the lower bound can be higher if wired entries are used, which I
> think we occasionally do) for R4k machines with a buggy CP0 counter.
> So there are either 56 or up to 48 distinct CP0 Random register values.

Ahh interesting, so it varies a bit, but it remains rather small.

> It depends on the exact system.  Some have a 32-bit high-resolution
> counter in the chipset (arch/mips/kernel/csrc-ioasic.c) giving like
> 25MHz resolution, some have nothing but jiffies.

Alright, so there _are_ machines with no usable c0 cycle counter but with
a good clock. Still, 25MHz is a lot coarser than a CPU cycle, so this c0
random ORing trick perhaps remains useful.
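
For context, what I have in mind for random_get_entropy_fallback() in the
1/xx patch is essentially "read whatever clocksource the platform
registered", so on those DECstations it's the IOASIC counter that ends up
being read. Roughly, glossing over the details of the actual patch:

    /* kernel/time/timekeeping.c, simplified sketch, not the literal patch */
    unsigned long random_get_entropy_fallback(void)
    {
            struct tk_read_base *tkr = &tk_core.timekeeper.tkr_mono;
            struct clocksource *clock = READ_ONCE(tkr->clock);

            /* Early boot (nothing registered yet) or suspended: punt. */
            if (unlikely(timekeeping_suspended || !clock))
                    return 0;
            return clock->read(clock);
    }
    EXPORT_SYMBOL_GPL(random_get_entropy_fallback);

And on the machines with nothing but jiffies, that read just returns
jiffies, which is why mixing the faster-changing c0 random bits into the
low end still looks worthwhile to me.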

> It seems like a reasonable idea to me, but the details would have to be
> sorted out, because where a chipset high-resolution counter is available
> we want to factor it in, and otherwise we need to extract the right bits
> from the CP0 Random register, either 13:8 for the R3k or 5:0 for the R4k.

One thing we could do here that would seemingly cover all the cases
without losing _that_ much would be:

    return (random_get_entropy_fallback() << 13) | ((1<<13) - read_c0_random());

Or in case the 13 turns out to be wrong on some hardware, we could
mitigate the effect with:

    return (random_get_entropy_fallback() << 13) ^ ((1<<13) - read_c0_random());

As mentioned in the 1/xx patch of this series,
random_get_entropy_fallback() should call the highest resolution thing.
We then shave off the least-changing bits and stuff in the faster-changing
bits from read_c0_random(). Then, in order to keep it counting up instead
of down, we do the subtraction there.

What do you think of this plan?

Jason
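
PS: To make the plan a bit more concrete, here's roughly what I'm
picturing on the MIPS side, entirely untested, with your 13:8 / 5:0
ranges folded into a common 6-bit quantity (so shifting by 6 rather than
13 once the bits are extracted), and with cpu_has_3kex just being my
guess at the right way to tell the two TLB flavours apart:

    /* arch/mips/include/asm/timex.h, untested sketch */
    static inline unsigned long random_get_entropy(void)
    {
            unsigned int c0_random;

            /* ... the existing "CP0 Count is usable" early return stays here ... */

            /* Normalize Random to a 6-bit, upward-counting value. */
            if (cpu_has_3kex)                       /* Random in bits 13:8 */
                    c0_random = (read_c0_random() >> 8) & 0x3f;
            else                                    /* Random in bits 5:0 */
                    c0_random = read_c0_random() & 0x3f;

            /*
             * Shave the slowest-changing top bits off the fallback clock
             * and stuff the fast-changing Random bits, inverted so they
             * count up, into the freed low positions.
             */
            return (random_get_entropy_fallback() << 6) | (0x3f - c0_random);
    }
    #define random_get_entropy random_get_entropy

Since the low 6 bits of the shifted fallback are zero, | and ^ come out
the same here; the XOR variant only matters if the bit-count guess turns
out to be off.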