One basic question... why limit this to /dev/random?  If we're trying
to avoid fd exhaustion attacks, wouldn't an "atomically read a file
into a buffer" system call (that could be used on /dev/urandom, or
/etc/hostname, or /proc/foo, or...) be more useful?  E.g.

ssize_t readat(int dirfd, char const *path, struct stat *st,
               char *buf, size_t len, int flags);

It's basically equivalent to openat(), an optional fstat() (if st is
non-NULL), read(), and close(), but it never allocates an fd number.

Is it really necessary to have a system call just for entropy?  If you
want a "urandom that blocks until seeded", you can always create
another device node for the purpose.

> The main argument I can see for putting in a limit is to encourage the
> "proper" use of the interface.  In practice, anything larger than 128
> probably means the interface is getting misused, either due to a bug
> or some other kind of oversight.

Agreed.  Even 1024 bits is excessive.  32 bytes is the "real" maximum
that people should be asking for with current primitives, so limiting
the interface to 64 is quite defensible.  (But 128 isn't *wildly*
excessive.)

If you do stick with a random-specific call, specifying the entropy in
bits (with some specified convention for the last fractional byte) is
another interesting idea.  Perhaps too prone to bugs, though: people
might think it's bytes and produce low-entropy keys.
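Going back to the readat() idea above, here's a rough userspace sketch
of the semantics I have in mind; the function name and the O_RDONLY
default are just my illustration, not an existing API, and of course a
userspace emulation can't avoid allocating the fd the way a real
syscall would:

/*
 * Userspace approximation of the proposed readat():
 * openat() + optional fstat() + read() + close() in one shot.
 * A real implementation would probably want to loop on short
 * reads; this sketch just forwards whatever read() returns.
 */
#include <fcntl.h>
#include <sys/stat.h>
#include <unistd.h>

ssize_t readat_emulated(int dirfd, char const *path, struct stat *st,
                        char *buf, size_t len, int flags)
{
    int fd = openat(dirfd, path, O_RDONLY | flags);
    if (fd < 0)
        return -1;

    /* Only stat the file if the caller asked for it. */
    if (st && fstat(fd, st) < 0) {
        close(fd);
        return -1;
    }

    ssize_t n = read(fd, buf, len);
    close(fd);
    return n;
}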