Pavel, I have a big ol' enterprise daemon running with established file
descriptors serving thousands of connections, which periodically require
entropy. Now I run out of descriptors. I can't establish new connections,
but should I also halt all the existing ones that require entropy? Should
I raise SIGKILL on my process serving these thousands of connections? I
don't think so. (A sketch of what the proposed call buys that daemon is
below the quote.)

On Wed, Jul 30, 2014 at 6:26 AM, Pavel Machek <pavel@xxxxxx> wrote:
> Hi!
>
>> The rationale of this system call is to provide resilience against
>> file descriptor exhaustion attacks, where the attacker consumes all
>> available file descriptors, forcing the use of the fallback code where
>> /dev/[u]random is not available. Since the fallback code is often not
>> well-tested, it is better to eliminate this potential failure mode
>> entirely.
>
> I'm not sure I understand the rationale; if someone can eat all your
> file descriptors, he can make you stop working. So you can just stop
> working when you can't open /dev/urandom, no?
>
> Fallback code is probably a very bad idea to use...
>
>> The other feature provided by this new system call is the ability to
>> request randomness from the /dev/urandom entropy pool, but to block
>> until at least 128 bits of entropy has been accumulated in the
>> /dev/urandom entropy pool. Historically, the emphasis in /dev/urandom
>> development has been to ensure that the urandom pool is initialized as
>> quickly as possible after system boot, and preferably before the init
>> scripts start execution.
>
> Sounds like an ioctl() on /dev/urandom for this behaviour would be nice?
>
> Pavel
> --
> (english) http://www.livejournal.com/~pavelmachek
> (cesky, pictures) http://atrey.karlin.mff.cuni.cz/~pavel/picture/horses/blog.html
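
To make that concrete, here is a minimal sketch of the daemon's entropy
path with the proposed interface. It assumes kernel headers from the
patched tree that define __NR_getrandom, and get_entropy() is just an
illustrative helper name, not anything from the patch itself:

  #include <errno.h>
  #include <stddef.h>
  #include <unistd.h>
  #include <sys/syscall.h>  /* __NR_getrandom, from the patched headers */

  /* Pull len bytes of randomness without opening any file, so an
   * attacker who has exhausted our descriptor table cannot starve
   * the established connections of entropy. */
  static int get_entropy(void *buf, size_t len)
  {
          long ret;

          do {
                  ret = syscall(__NR_getrandom, buf, len, 0);
          } while (ret < 0 && errno == EINTR);

          return ret == (long)len ? 0 : -1;
  }

No open(), no fd: descriptor exhaustion can block new connections, but
not the crypto on the established ones.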
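
The second feature in the quoted text, blocking until the pool holds its
initial 128 bits, would look something like this at daemon startup. This
reuses the get_entropy() sketch above and assumes the zero-flags blocking
semantics described in the patch (the exact flag names were still under
discussion in this thread):

  #include <stdio.h>

  int main(void)
  {
          unsigned char key[16];  /* 128 bits, the threshold quoted above */

          /* With flags == 0 the call blocks until the urandom pool has
           * received its initial entropy, so we never hand out bytes
           * from an unseeded pool during early boot. */
          if (get_entropy(key, sizeof(key)) != 0) {
                  perror("getrandom");
                  return 1;
          }
          /* ... bring up listeners, serve connections ... */
          return 0;
  }

An ioctl() on /dev/urandom could express the same wait, but it still
requires opening the device first, which is exactly the step that fails
under descriptor exhaustion.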