> > Yes, but the right way to do that is to lock the chacha context
> > in the reseed function and call extract_buf() while that lock
> > is held. I'll send a patch for that soon.
>
> Extract_buf() is supposed to be able to reliably generate high
> quality randomness; that's why we use it for the chacha reseed. If
> extract_buf() can return the same value for two parallel calls to
> extract_buf(), that's a Bad Thing.

I agree completely.

> For example, suppose there were two chacha contexts reseeding using
> extract_buf(), and they were racing against each other on two
> different CPUs. Having two of them reseed with the same value would
> be a cryptographic weakness.

This confuses me a bit. Are you saying two CPUs can have different
primary chacha contexts but reseed from the same input pool? Why?
Reading the code, I thought there'd be only one primary crng & the
others would reseed from it.

> NACK to both patches.

OK, but as Mike wrote in the thread he started about his proposed
lockless driver:

" It is highly unusual that /dev/random is allowed to degrade the
" performance of all other subsystems - and even bring the
" system to a halt when it runs dry. No other kernel feature
" is given this dispensation,

I don't think a completely lockless driver is at all a good idea & I
think he overstates the point a bit in the quoted text. But I do
think he has a point.

Locking the input pool while extract_buf() reads & hashes it seems
wrong to me because it can unnecessarily block other processes.

crng_reseed() already locks the crng context. My patch (which I
probably will not now write, since it has already got a NACK) would
make it call extract_buf() while holding that lock; that prevents any
problem of duplicate outputs but avoids locking the input pool during
the hash. A rough sketch of what I had in mind is at the end of this
mail.

If my proposed patch would be unacceptable, it seems worth asking
whether there is a better way to eliminate the unnecessary lock.
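
For concreteness, here is roughly the shape of the change. This is
untested and written from memory of the current driver, so treat the
names (crng_reseed(), extract_entropy(), crng->lock, crng->state[])
and signatures as illustrative rather than exact; it also leaves out
the arch_get_random_seed_long() mixing that the real reseed does, and
it only covers the reseed-from-input-pool case (r != NULL):

static void crng_reseed(struct crng_state *crng, struct entropy_store *r)
{
        unsigned long flags;
        __u32 key[8];
        int i;

        spin_lock_irqsave(&crng->lock, flags);
        /*
         * A second CPU reseeding this context now serializes here
         * instead of racing us into the extract path, so two reseeds
         * of the same context can never be keyed with identical
         * extract_buf() output -- even if extract_buf() stopped
         * holding the input pool lock across the whole hash.
         */
        if (extract_entropy(r, key, sizeof(key), 16, 0) == 0)
                goto out;               /* pool has nothing for us */
        for (i = 0; i < 8; i++)
                crng->state[i + 4] ^= key[i];   /* chacha key words */
        crng->init_time = jiffies;
out:
        memzero_explicit(key, sizeof(key));
        spin_unlock_irqrestore(&crng->lock, flags);
}

The obvious thing to check is whether calling into extract_entropy()
with crng->lock held (and interrupts off) is acceptable: it lengthens
the irqs-off window by a SHA pass over the pool, and it nests the
pool lock inside the crng lock, so that ordering would have to hold
everywhere. I have not audited either point.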