Hi Folks,

It seems that Anna is not a member of the openssh-unix-dev mailing list, so her message bounced from there. I am adding this copy of the message for future readers of this thread.

	-- Mark

------- forwarded message -------
From: Anna Johnston <amj@xxxxxxxxxxx>
To: "openssh-unix-dev@xxxxxxxxxxx" <openssh-unix-dev@xxxxxxxxxxx>
Subject: Re: DH Group Exchange Fallback
Date: Tue, 26 Sep 2017 19:24:09 +0000

The main threat against finite field discrete logarithm based public key cryptography (PKC) is the number field sieve (NFS). Once a prime is broken, it is broken everywhere. Primes are cryptographic security parameters: they should be different for different applications and should be changed regularly, just as you would a cryptovariable (cryptographic key). The use of fixed primes is even cited as a rationale for why similarly sized RSA is more secure than finite field DSA or ElGamal in the recent discrete log record paper (page 2 of https://eprint.iacr.org/2017/067.pdf). They say this even though factoring is easier than computing a discrete log -- a fact generally accepted by anyone who has implemented these algorithms, and one the same paper mentions on page 8.

Pocklington's theorem not only generates primes efficiently (only 2 exponentiations to verify primality), but also yields an element of large prime order and a primality proof that is much smaller and cheaper to verify than elliptic curve primality proofs (ECPP); a sketch of the verification step follows below. The proof for a 2^k-bit prime would be around 2^{k+1} - 1 bits (the recursive certificate records a chain of primes of roughly halving size, and 2^k + 2^{k-1} + ... + 1 = 2^{k+1} - 1), and there may be ways to make the proof smaller still. The theorem (used for both the primality proof and generation) works for any prime, as long as the factorization of about half the bits of P - 1 is known (or fewer, with a bit more work). The probability of the algorithm generating the same prime twice is near zero as long as a decent random number generator seeds the prime generation algorithm. That seed could even come from a previous prime's DH key exchange.

Safe primes are rare compared to the number of similarly sized cryptographically secure primes. Their rarity and special form could expose them to pre-computation or other special attacks that their more general counterparts avoid.

Side note: the paper mentioned above also highlights the flawed nature of current costings of the NFS for discrete logs (DL) at larger (i.e., >1024-bit) moduli. Old costings discounted the cost of the linear algebra, claiming the costs of the DL and factoring variants of the NFS were equal. In fact, the cost of the DL NFS for larger primes is driven as much, if not more, by the linear algebra step as by the sieving. Current records, including the 768-bit DL, shift work away from the linear algebra and onto the sieve to reduce the cost of the linear algebra and of the algorithm overall. The linear algebra generally requires expensive GPUs and isn't massively parallel.
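As a concrete illustration of that two-exponentiation check, here is a minimal sketch in Python. The function name and the toy values are illustrative only, and it covers just the simplest case, where N - 1 = q * R with a single already-proven prime q > sqrt(N):

from math import gcd, isqrt

def pocklington_check(N, q, a):
    # Pocklington test for N with N - 1 = q * R, where q is a proven
    # prime and q > sqrt(N). Returns True iff witness a proves N prime.
    R, rem = divmod(N - 1, q)
    if rem != 0 or q <= isqrt(N):
        raise ValueError("need q | N - 1 and q > sqrt(N)")
    t = pow(a, R, N)          # exponentiation 1: a^((N-1)/q) mod N
    if pow(t, q, N) != 1:     # exponentiation 2: a^(N-1) mod N
        return False          # Fermat condition fails for this witness
    # Order condition: no prime divisor of N may divide a^((N-1)/q) - 1.
    if gcd(t - 1, N) != 1:
        return False
    # Bonus: t now has exact order q mod N -- the element of large prime
    # order mentioned above, usable as a DH generator.
    return True

# Tiny example: N = 23, N - 1 = 11 * 2, and 11 > sqrt(23).
assert pocklington_check(23, 11, 2)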
On 9/25/17, 20:51, "Daniel Kahn Gillmor" <dkg@xxxxxxxxxxxxxxxxx> wrote:

On Sun 2017-09-24 22:54:12 -0700, Mark D. Baushke wrote:
> The whole point of using ephemeral safe primes a la RFC 4419 is to make
> it too difficult to use pre-calculation tables. It seems reasonable to
> suppose that there are fewer safe primes in a bit range than other
> primes, including provable primes.
>
> To be honest, I am uneasy about continuing to use Sophie Germain and
> safe primes exclusively.

I don't understand this part. You want to defend against an attacker with cryptanalytic capabilities by *not* using safe primes?

Are you expecting everyone operating a server to choose their own random parameters, and to generate primality proofs for them? What's stopping time-starved admins from just copying someone else's group parameters and the associated proofs, thereby creating a de facto common group that's worth mounting a pre-calculation attack against? How are you expecting clients to verify that the groups they're using, while maybe not safe primes, are not in wide use elsewhere? Are they supposed to track a list of parameters they've seen and reject them once they start seeing "too many" of them?

> For my effort, I would find it 'better' to consider moving to provable
> primes. Of course, that would mean sending all three of g, p, q to the
> client for them to validate that the primes are safe using something
> like Pocklington's theorem. This should be fairly quick as such things
> go. It does mandate a change to the protocol to send q over the wire
> too.

Are you talking about shipping the proofs in addition to shipping the group parameters? The primality proof for even a 2048-bit prime is about 50KB:

  https://dkg.fifthhorseman.net/ffdhe-primality-proofs/

They get *much* larger as the size of the prime grows. And verifying them (while not nearly as expensive as generating them) is not something I can imagine any ssh client actually doing before making a connection -- I mean, why should they, when it's cheaper and quicker to just "trust the server"?

(And that's to say nothing of the efficiency gains for the defenders of having pre-computed tables for a known group that can accelerate the fixed-base exponentiation part of DH.)

Using a-priori-known, well-vetted, strong group parameters seems like a more reliable approach for the ecosystem as a whole. The proposal being explored in this thread does not seem like a good idea.

    --dkg

_______________________________________________
openssh-unix-dev mailing list
openssh-unix-dev@xxxxxxxxxxx
https://lists.mindrot.org/mailman/listinfo/openssh-unix-dev
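dkg's "why should they" point can be made concrete. The cheapest check a group-exchange client can realistically run on a server-supplied (p, g) is probabilistic: confirm p looks like a safe prime (p = 2q + 1 with q probably prime) and that g is in range. A sketch follows, again in Python; the function names, the 40-round count, and the 2048-bit floor are illustrative assumptions, not OpenSSH's actual code:

import random

def miller_rabin(n, rounds=40):
    # Probabilistic primality test; error probability <= 4^(-rounds).
    if n < 2:
        return False
    for p in (2, 3, 5, 7, 11, 13, 17, 19, 23, 29, 31, 37):
        if n % p == 0:
            return n == p
    d, s = n - 1, 0
    while d % 2 == 0:
        d //= 2
        s += 1
    for _ in range(rounds):
        a = random.randrange(2, n - 1)
        x = pow(a, d, n)
        if x in (1, n - 1):
            continue
        for _ in range(s - 1):
            x = pow(x, 2, n)
            if x == n - 1:
                break
        else:
            return False
    return True

def plausible_gex_group(p, g, min_bits=2048):
    # Sanity check in the spirit of RFC 4419 group exchange: p is a
    # probable safe prime and g is in range. Note what this cannot do:
    # it says nothing about whether the group is unique to this server
    # or already in wide use elsewhere, which is the objection above.
    if p.bit_length() < min_bits or not 1 < g < p - 1:
        return False
    q = (p - 1) // 2
    return miller_rabin(p) and miller_rabin(q)

Even this costs dozens of 2048-bit modular exponentiations per connection, which is exactly the cost/benefit trade-off weighed above.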