On Sun, Sep 15, 2019 at 10:32:15AM -0700, Linus Torvalds wrote:
> * We will block for at most 15 seconds at a time, and if called
> * sequentially will decrease the blocking amount so that we'll
> * block for at most 30s total - and if people continue to ask
> * for blocking, at that point we'll just return whatever random
> * state we have acquired.

I think that the exponential decay will either not be used at all or
be used in full, so in practice you'll always end up with 0 or 30s
depending on the entropy situation, because I really do not see any
valid reason for entropy to suddenly start to appear after 15s if it
didn't prior to this. As such I do think that a single timeout should
be enough.

In addition, since you're leaving the door open to bikeshed around
the timeout value, I'd say that while 30s is usually not huge in a
desktop system's life, it actually is a lot in network environments
where it delays a switchover. It can cause other timeouts to occur
and leave quite a long, embarrassing blackout. I'd guess that a max
total wait time of 2-3s should be OK though, since application
timeouts are rarely lower, TCP generally starting to retransmit at
3s. And even within 3s we're supposed to see quite a few interrupts,
otherwise it's unlikely that much more will happen between 3 and 30s.

If the setting had to be made user-changeable, it could make sense to
let it be overridden on the kernel's command line, though I don't
think that should be necessary with a low enough value.

Thanks,
Willy