G'day.
Always fun to watch an exchange among entrenched perspectives...
On 2/14/2012 9:31 AM, Bob Braden wrote:
However, Vint Cerf, the ARPA program manager, ruled against variable length
addresses and decreed the fixed length 32 bit word-aligned addresses of RFC
791. His argument was that TCP/IP had to be simple to implement if it were to
succeed (and survive the juggernaut of the ISO OSI protocol suite).
Experience with development, deployment and operations using a core construct is
not irrelevant when trying to upgrade an infrastructure function.
As I recall, there was essentially no experience with variable length addresses
-- and certainly no production experience -- then or even by the early 90s, when
essentially the same decision was made and for essentially the same reason.[1]
It's not that variable length addressing is a bad idea; it's that it didn't get
the research work and specification detail it needed, for introduction into what
had become critical infrastructure. What I recall during the IPng discussions
of the early 90s was promotion of the /concept/ of variable length addressing
but without the experiential base to provide assurance we knew how it would operate.
Clearly the motivation for variable-length addresses is a Good Thing, since
having it work properly would avoid needing an infrastructure change to
quadruple the address size every 2-3 decades. (With 128-bit addressing,
perhaps it will be longer; with the rapid expansion of the Internet, perhaps it
will be sooner.)
But really I'll suggest that fixed-vs-variable addressing is essentially
irrelevant to the question of transition ease and backward compatibility. What
mostly affects that is effort. Development, deployment and operations effort.
For an established infrastructure, the more a change is different from what
already is used, the more effort it takes to introduce the new thing. (Versions
of this point have been made on this thread repeatedly by other folk, and mostly
everyone is ignoring the premise. I'll add that for folk who have noted the
potential significance of my occasional consonance with the views of John
Klensin, please be aware that its occurrence with Randy Bush is considerably
more rare...)
Among the arguments being used to miss the point are:
On 2/14/2012 2:34 PM, Brian E Carpenter wrote:
I'm sorry, but *any* coexistence between RFC791-IPv4-only hosts and
hosts that are numbered out of an address space greater than 32 bits
requires some form of address sharing, address mapping, and translation.
It doesn't matter what choice we made back in 1994. Once you get to the
point where you've run out of 32 bit addresses and not every node can
support >32 bit addresses, you have the problem.
This notes an objective reality about a limit, while entirely missing the
potential benefit available until reaching it. In this case, that was a 15-20
year window.
If the design goal had been restricted to the original "increase the address
space bits", without encumbering it with additional architecture and functional
changes, and had been based on re-use of the existing IPv4 scheme, the initial
upgrade could have:
a) been a module tacked on to existing IPv4 implementations and
installed as a relatively minor software upgrade (albeit with some additional
API hooks) to most implementations, rather than the traumatic installation of an
entirely new stack
b) used trivial format-mapping translators (sketched below) rather than having
to deal with the software and operational complexities of gateways that have to
reconcile independent address spaces and other functionality
c) had essentially no incremental deployment or operations costs
This would have gotten the core of the larger address space mechanism deployed
and operational long before it was needed, and the focus after that would have
been restricted to /using/ the additional bits, rather than trying to solve a
variety of additional problems.
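To make point (b) concrete, here is a minimal sketch in Python, purely
illustrative and not drawn from any actual proposal: it assumes a hypothetical
64-bit "extended" address whose low-order 32 bits simply re-use a node's
existing IPv4 address, so the translator is nothing more than a shift and a mask.

    import ipaddress

    # Hypothetical layout: high 32 bits are an expansion field (zero during
    # the transition window); low 32 bits carry the node's IPv4 address verbatim.

    def v4_to_extended(v4_text, expansion=0):
        # Embed a legacy IPv4 address in the low-order 32 bits.
        return (expansion << 32) | int(ipaddress.IPv4Address(v4_text))

    def extended_to_v4(ext):
        # Recover the legacy form; only meaningful while the expansion
        # field is still zero.
        if ext >> 32:
            raise ValueError("expansion bits in use; no IPv4 equivalent")
        return str(ipaddress.IPv4Address(ext & 0xFFFFFFFF))

The point is only that, with the legacy address embedded verbatim, a boundary
translator is a stateless format rewrite rather than a gateway reconciling
independent address spaces.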
Moving from re-using IPv4 addressing to expansion into new IPv6 bits for
addressing could have then been a separate, incremental step. An important one,
certainly. A step with its own challenges, sure. But incremental and narrow.
The classic project management point for major change is to minimize critical
dependencies. Instead, the IPv6 effort increased them.
On 2/14/2012 3:56 PM, Bob Hinden wrote:
The deployment problem was not due to technical issues, it was because the Internet changed to only deploy new technology that generated revenue in the short term.
The Arpanet did not convert to production use of TCP/IP until it was forced to
by a central authority in 1983. Long before 'revenue' was an issue.
Absent coercion like this, organizations do not incur the considerable cost of
infrastructure change without a strongly-perceived need that will have
reasonably immediate and significant benefit. This is one of the very basic
reasons that any change targeting longer-term benefit and with little-to-no
short-term benefit needs to be made as painless as possible. In this case,
painless would have meant being as compatible as possible, with small increments
of change.
That is, I claim that absent well-funded research efforts and/or the leverage of
a central authority, the history of the Net has /always/ been to focus on
immediate need, not long-term benefit. (This includes the original motivations
for inventing packet-switching. Lofty long-term visions notwithstanding, it was
overcoming expensive [and fragile] long-haul telecom costs and
hostile-environment outages that motivated the work.)
On 2/14/2012 7:04 PM, Brian E Carpenter wrote:
>> You would not have two distinct routing tables for two independent
>> Internets, but a single routing table for a single Internet.
>
> True, but why is this a particular advantage? It wouldn't have
> affected the need for an update to BGP4, for example.
With the sort of highly-constrained upgrade I've described above, no immediate
change to BGP would have been required. That could have been deferred to a
follow-on effort. The same applies to DHCP re-use.
On 2/14/2012 3:26 PM, Mark Andrews wrote:
Happy eyeballs just points out problems with multi-homing in general.
Multi-homing is a problem above the IP layer, not at it. As classically noted,
the problem starts with using addresses as identifiers.
d/
[1] Unless I've misread, my brother and Scott have exactly opposite memories of
which constituencies were lobbying for and against variable-length addressing.
--
Dave Crocker
Brandenburg InternetWorking
bbiw.net