Re: [PATCH, RFC] random: introduce getrandom(2) system call

On Do, 2014-07-17 at 08:52 -0400, Theodore Ts'o wrote:
> On Thu, Jul 17, 2014 at 12:57:07PM +0200, Hannes Frederic Sowa wrote:
> > 
> > Btw. couldn't libressl etc. fall back to the binary_sysctl
> > kernel.random.uuid and seed with that as a last resort? We will have
> > it available for a few more years.
> 
> Yes, they could.  But trying to avoid more uses of binary_sysctl seems
> to be a good thing, I think.  The other thing this interface provides
> is the ability to block until the entropy pool is initialized, which
> isn't a big deal for x86 systems, but might be useful as a gentle
> forcing function to force ARM systems to figure out good ways of
> making sure the entropy pools are initialized (i.e., by actually
> providing a !@#!@ cycle counter) without breaking userspace
> compatibility --- since this is a new interface.

I am not questioning this new interface - I like it - I just wanted to
mention that there is already a safe fallback for LibreSSL, the same
way they already seem to do it on OpenBSD (via sysctl).

> 
> > > +	if (count > 256)
> > > +		return -EINVAL;
> > > +
> > 
> > Why this "arbitrary" limitation? Couldn't we just check for > SSIZE_MAX
> > or to be more conservative to INT_MAX?
> 
> I'm not wedded to this limitation.  OpenBSD's getentropy(2) has an
> architected arbitrary limit of 128 bytes.  I haven't made a final
> decision if the right answer is to hard code some value, or make this
> limit be configurable, or remove the limit entirely (which in practice
> would be SSIZE_MAX or INT_MAX).
> 
> The main argument I can see for putting in a limit is to encourage the
> "proper" use of the interface.  In practice, anything larger than 128
> probably means the interface is getting misused, either due to a bug
> or some other kind of oversight.
> 
> For example, when I started instrumenting /dev/urandom, I caught
> Google Chrome pulling 4k out of /dev/urandom --- twice --- at startup
> time.  It turns out it was the fault of the NSS library, which was
> using fopen() to access /dev/urandom.  (Sigh.)

In the end people would just call getentropy() repeatedly in a loop,
fetching 256 bytes each time. I don't think the artificial limit makes
any sense. I agree that lifting it allows potential misuse of the
interface, but wouldn't a warning in dmesg suffice?

Dropping the limit also makes it easier to port applications from
open("/dev/*random") + read(...) to getentropy(), since they can keep
using the same buffer sizes.

I would vote for a warning (at about 256 bytes) and no limit.

Thanks,
Hannes





