Long time to standard (was: Re: Google, etc, and their proprietary protocols)

On 11/27/2020 8:00 PM, Theodore Y. Ts'o wrote:
> On Fri, Nov 27, 2020 at 10:56:14AM -0500, Warren Kumari wrote:
>> Publishing documentation on how to use an API on the
>> Google (internal) site is easy; standardizing through the IETF
>> process takes many months/years, often requiring travel, extensive
>> time on mailing lists, etc. One exception to this is QUIC. QUIC was
>> started in Google, but, because of its broad impact on the core
>> plumbing of the Internet and the need for interoperability, it was
>> clear that bringing it to the IETF would result in a more widely
>> deployed and used protocol.
>
> One thing that's worth noting about QUIC is that it first appeared in
> Google's Chrome in 2013; it entered the IETF standardization process
> in 2015, with the working group established in 2016.  And as of 2020,
> QUIC is finally in Last Call.

Sometimes, the process is to blame. Sometimes, writing the spec just requires a long time. In the case of QUIC, there was quite a bit of the latter. The working group engaged in what amounted to a discovery process, iterating on the design as issues surfaced in early deployments and interoperability tests. Doing something like QUIC was not exactly simple.

> No doubt QUIC has improved significantly in the intervening four
> years.  But if you are trying to ship product for the Christmas
> holiday season, it's hard when the standardization process takes years
> and the timeline is not under a company's control.  So sure, you can
> ship versions based on some draft version (CloudFlare was deploying
> something based on QUIC I-D version 14 as a beta in 2018), but there
> will almost certainly be interoperability problems if different
> companies are shipping based on different versions of the draft spec
> --- in which case the benefits of standardization will have been
> seriously diluted.

In fact, Google kept shipping successive versions of "Google QUIC" in its products during all these years, progressively integrating parts of the IETF design in its code, until it arrived at interoperability with the IETF specification a few months ago, following draft-29. Other companies followed the same pattern, deploying successive drafts in controlled environments such as an internal "back end" network, or the path between a proprietary app and its servers. They learned a lot in the process, and contributed to the standardization.

One way the working group dealt with that was by conducting regular interoperability tests pinned to specific draft numbers, which ensured that some draft versions would be well supported by many implementations. Another was the protocol's support for version negotiation. But mostly, the rule was to deploy interim QUIC versions only on systems that supported frequent software upgrades.
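
To make the version-numbering side of that concrete, here is a minimal sketch in Python, not taken from picoquic or any other real code base. It assumes the IETF convention of identifying draft-N by the 32-bit version number 0xff000000 + N (QUIC v1 is 0x00000001); the SUPPORTED_VERSIONS list, the function names, and the tuple return values are purely illustrative.

# Hedged sketch, not drawn from any real implementation: a server's
# reaction to the QUIC version proposed in a client's Initial packet,
# during the years when deployments tracked specific drafts.

DRAFT_BASE = 0xff000000  # IETF convention: draft-N is 0xff000000 + N
QUIC_V1 = 0x00000001

def draft_version(n):
    """32-bit version number identifying IETF QUIC draft-N."""
    return DRAFT_BASE + n

# Illustrative support list: QUIC v1 plus a few recent drafts.
SUPPORTED_VERSIONS = [QUIC_V1, draft_version(29), draft_version(28), draft_version(27)]

def handle_client_version(client_version):
    """Proceed if the client's version is supported; otherwise answer
    with a Version Negotiation packet (version field 0x00000000) that
    lists the supported versions, so the client can retry with one."""
    if client_version in SUPPORTED_VERSIONS:
        return ("proceed", client_version)
    return ("version_negotiation", SUPPORTED_VERSIONS)

# A peer still shipping draft-25 gets steered toward draft-27..29 or v1:
print(handle_client_version(draft_version(25)))

The selection logic itself is trivial; the hard part was operational, keeping the supported list moving forward on both ends of every connection, which is why the frequent-upgrade rule mattered so much.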

I don't know whether this will apply to other IETF endeavors, but it is probably worth documenting.

-- Christian Huitema


