Re: [PATCH] random: add chacha8_block and switch the rng to it

My 2 cents:

As a cryptanalyst, having discovered the 2008 attack on ChaCha that has
only been slightly improved in the 16 years since: the 20-round ChaCha20
is a clear waste of CPU cycles, but ChaCha8 is admittedly risky, though
more in terms of PR than on pure crypto merits (plus, afaiu, the threat
model of ChaCha in the Linux PRNG doesn't allow the kind of chosen-IV
"attack" known to work on reduced-round versions).

Switching from ChaCha20 to ChaCha12 might still raise eyebrows, but I
don't think any respectable crypto/security expert will suspect a
Jia Tan situation.
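For anyone following along: the only difference between the variants being
debated is the number of rounds run over the same 16-word state, which is why
the cost scales roughly linearly with the round count. A minimal illustrative
sketch (pure Python, not the kernel's implementation; quarter-round per
RFC 8439):

```python
def quarter_round(s, a, b, c, d):
    """The ChaCha quarter-round: add, XOR, rotate (RFC 8439, section 2.1)."""
    rotl = lambda x, n: ((x << n) | (x >> (32 - n))) & 0xffffffff
    s[a] = (s[a] + s[b]) & 0xffffffff; s[d] = rotl(s[d] ^ s[a], 16)
    s[c] = (s[c] + s[d]) & 0xffffffff; s[b] = rotl(s[b] ^ s[c], 12)
    s[a] = (s[a] + s[b]) & 0xffffffff; s[d] = rotl(s[d] ^ s[a], 8)
    s[c] = (s[c] + s[d]) & 0xffffffff; s[b] = rotl(s[b] ^ s[c], 7)

def chacha_block(state, rounds=20):
    """One ChaCha block; 'rounds' (8, 12, or 20) is the only knob that
    distinguishes ChaCha8, ChaCha12, and ChaCha20."""
    working = list(state)
    for _ in range(rounds // 2):  # one loop iteration = one double round
        # column rounds
        quarter_round(working, 0, 4, 8, 12)
        quarter_round(working, 1, 5, 9, 13)
        quarter_round(working, 2, 6, 10, 14)
        quarter_round(working, 3, 7, 11, 15)
        # diagonal rounds
        quarter_round(working, 0, 5, 10, 15)
        quarter_round(working, 1, 6, 11, 12)
        quarter_round(working, 2, 7, 8, 13)
        quarter_round(working, 3, 4, 9, 14)
    # feed-forward: add the input state back into the permuted state
    return [(w + s) & 0xffffffff for w, s in zip(working, state)]
```

Since the per-round work is identical, ChaCha8 does 8/20 of ChaCha20's mixing,
which is where the roughly 2.5x throughput figure mentioned downthread comes
from.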

On Wed, May 1, 2024 at 2:28 PM Theodore Ts'o <tytso@xxxxxxx> wrote:
>
> So first of all, my apologies for giving you offense.  I really didn't
> think you were a shill for the NSA or the MSS, but I have to admit
> that when I get a large set of patches which removes "unnecessary"
> code, which is _technically_ safe, but which reduces the safety
> margin, I find myself wondering whether it's part of a binary payload.
> (This is especially true when I get patches from someone I don't
> normally receive patches from.)  Unfortunately, in the wake of the xz
> hack, we're just all going to have to be a lot more careful.
>
> On Tue, Apr 30, 2024 at 10:44:09AM -0600, Aaron Toponce wrote:
> >
> > The goal is just to make the CSPRNG more efficient without sacrificing security.
> > Of course most reads will be small for cryptographic keys. ChaCha8 means even
> > those small reads will be 2.5x more efficient than ChaCha20. The dd(1) example
> > was just to demonstrate the efficiency, not to be "fun".
>
> This is a philosophical question; are we going for maximum efficiency,
> or maximum safety so long as it meets the performance requirements for
> the intended use case?  From an academic perspective, or if a
> cryptographer is designing a cipher for a NIST competition, there's a
> strong desire for maximum efficiency, since that's one of the metrics
> used in the competition.  But for the Linux RNG, my bias is to go for
> safety, since we're not competing on who can do the fast bulk
> encryption, but "sufficiently fast for keygen".
>
> People of good will can disagree on what the approach should be.  I
> tend to have more of a pragmatic engineer's perspective.  It's been
> said that the Empire State Building is overbuilt by a factor of 10,
> but that doesn't bother me.  People are now saying that perhaps the
> Francis Scott Key bridge, when it is rebuilt, should have more safety
> margin, since container ships have gotten so much bigger.  (And
> apparently the ships run on cheap, contaminated diesel fuel, since
> the ship owners buy from the lowest bidder.)
>
> Or we can talk about how Boeing has been trying to cheap-out on plane
> manufacturing to save $$$; but I think you get the point of where I'm
> coming from.  I'm not a big fan of trimming safety margins and making
> things more efficient for its own sake.  (At least in the case of
> Boeing, the CEO got paid $22 million a year, so at least
> there's that.  :-)
>
> Now, if this is actually impacting the TLS connection termination for
> a Facebook or Bing or Google's front end web server, then great, we
> can try to optimize it.  But if it's not a bottleneck, what's the
> point?  Making change for change's sake, especially when it's reducing
> safety margins, is just one of those things that I find really hard to
> get excited about.
>
> Cheers,
>
>                                         - Ted




