Passwords over the wire and WebAuthn woes


 



Yes, yes, I know this is not W3C; hear me out if you will.

A while back, I started implementing a WebAuthn client and server to see if it was really the answer to our HOBA (RFC 7486) experimental RFC. The answer seems to be yes and no. From what I can tell, WebAuthn is extremely centered on solving the problem of using crypto frobs to sign bits to be put on the wire. Local credential stores (i.e., private keys sitting on your disk) seem to have been either out of scope or mostly ignored. I tried to get them to work on Chrome and Firefox: Chrome doesn't seem to have the ability at all, and Firefox requires enabling a flag in about:config. I was not able to get it to work.

This is really dismaying, because local credential stores are completely adequate for a huge number of cases. If deployed, they could substantially reduce password reuse, letting users remember just one really good local password to open their credential store. The other problem with introducing crypto frobs is that they make the entire protocol much more difficult to understand and deploy. I wrote code way back then with the HOBA stuff, and WebAuthn was really hard to understand even though I knew what was happening at a high level. W3C has since released WebCrypto, which gives apps the ability to roll their own public key auth between browser clients and server auth backends. I implemented some flows between client and server for joins, logins, and enrolling new devices once you have joined. As it turned out, the actual code was really straightforward: it took me about a day to back-integrate WebCrypto into the old crufty JS RSA libraries I had scrounged years ago.

So here's the question: the flows I created are definitely over the wire, but they run between what is really one party, the web site owner, who controls the code on both ends (server and client JS). However, as everybody knows, security is not easy, so getting those flows *correct* is very hard. I have some experience here, and it mainly tells me that I'm sure I got things wrong. So what is the policy within IETF for something a site could roll on its own, but really shouldn't, because it ought to be vetted? Is standardizing such a thing in scope for IETF or other standards bodies? Because at its heart this is not about interoperability across implementations, but about vetting a security design that goes over the wire.

Mike

PS: I am in no way bashing W3C. They solved a hard problem, but it seems they left out or ignored the one I was hoping they'd solve along the way.

PPS: I have an implementation of this running, and a GitHub repo, if anybody is interested in seeing what I did.



