What is the long-term plan for Internet evolution?

The Internet is a little over 40 years old. It has grown from a research network connecting ten institutions to a global infrastructure connecting tens of billions of devices.

My main frustration with the state of the IETF today is that many, if not most, of the serious challenges we now face are issues we had identified as problems 20 years ago. They didn't get addressed then because any change would 'take at least five years to deploy'. So here we are, still looking at the same set of problems.

In the security area, only TLS and PKIX can be considered to have achieved Internet-scale success. Use of OpenPGP, S/MIME, IPsec, DNSSEC, DPRIVE, and DANE is significant by the standards of 1995 but nowhere near ubiquitous.

Users are still second-class citizens in the Internet infrastructure. Institutions get names; machines get names. People get accounts, which are second-class identifiers bound to the name of a host or an organization. Alice cannot have autonomy as alice@xxxxxxxxxxx unless she actually owns example.com. And nobody can own example.com, because DNS names are rented, not sold. And no, anyone claiming that $250,000 for a TLD is anything other than a shakedown is gaslighting.

IPv6 is slowly deploying, but only because the pain of IPv4 address exhaustion is starting to become serious. Meanwhile, all our applications now run over HTTP, and not because HTTP is designed to do any of the things that are needed for application transactions or telemetry. The reason we run application services over HTTP is really a matter of inertia, plus the fact that there are simply not enough ports for static port assignments to be viable.

Everyone can see that HTTP/2 and QUIC are an improvement on HTTP/1.1 over TCP, at least for the intended application of browsing the Web. They get the attention because the Web is the biggest, most successful part of the Internet. But what about the parts of the infrastructure that don't work so well? How do we get to fix some of those?

We do not need an Internet 2.0. Most of the basic architecture of the Internet still applies, especially if people actually talk to Dave Clark et al. rather than accepting rigid ideological interpretations of their work from 40 years ago as if they were inscribed on tablets of stone. But we do need the Internet to evolve from its current state as a 30-year-old advanced engineering prototype.

What we need to do, in my view, is to apply a term that was fashionable back in the mid-90s when the Web came to MIT: re-engineering. We have re-engineered HTTP and TCP; how about taking a look at the rest of the stack?

I have spent the past two and a half years doing exactly that, and I think I have come up with quite a few areas where we can improve things. But I am probably not the only person doing that (well, I hope not), and I am certainly not the only person with ideas.


PHB
