>> Brian - is it provable that no design for a follow-on to IPv4 would
>> have provided that backward compatibility?  Or were there
>> architectural and engineering decisions that chose other features
>> over backward compatibility?
>
> 1. Take the original, simple Deering specification.
>
> 2. Declare the initial IPv6 address space as being the current
>    IPv4 address space, with all upper bits zero.
>
> 3. The requirement for connecting a v6 stack to a v4 stack is a
>    very simple IP header-mapping translation, with no loss of information
>    at the IP level.
>
> 4. The v6 stack would need to have a v4 mode, for use by v4
>    applications -- applications that use v4 addresses.

Close.  But that still wouldn't have let hosts with extended addresses (nonzero upper bits) converse with hosts that had only v4 capability, even assuming that "very simple IP header-mapping translation" in the signal path.  The upgraded hosts could have sent packets to the v4-only hosts, but not vice versa (a rough sketch below illustrates this).

Of course, if we could have all agreed on an approach like that 15 years ago, convinced stack vendors to add the stack extensions long before they were needed, AND somehow managed to get those extensions well tested, AND updated all of the APIs and applications to be able to use extended addresses, not to mention DNS, then we might have made our transition a lot easier.  That's a big IF, though.  My experience is that seldom-used extensions don't tend to work very well.

And if we had (at least initially) embedded IPng addresses inside IPv4 packets, by any of various means (IPAE had one approach, 6to4 another, and embedding the extended bits in an IPv4 option a third), and assumed that we would start out routing IPng packets over the IPv4 Internet, we could have avoided coupling (at least) several difficult problems:

(a) upgrading the network to use larger addresses,
(b) renumbering the network to improve routing scalability,
(c) simplifying the packet format.

(It's at least conceivable that if IPng had been designed as an extension to IPv4, to be carried at least initially over IPv4 networks, then NATs would have evolved in such a way as to facilitate use of extended addresses between participating pairs of hosts.  Not that this would have been obvious circa 1992.)

As it turned out, the approach chosen coupled those three problems with at least two others: the problem of having to upgrade apps, stacks, and DNS; and the problem of forcing apps to choose between multiple source and destination addresses.

And finally there's the problem of marketing/mindshare: trying to get people to understand and accept the subtle differences between IPv4 and IPv6 instead of selling them an enhanced version of IPv4.  By comparison, the notion of "extended addressing" was already familiar from the PC world.  It might have been much easier to sell IPv4 with extended addresses than to sell the "new" IPv6.

This is, I believe, a classic case of second-system effect.  Trying to solve several problems at once significantly raises the difficulty of solving any of them, particularly when adoption of the solution requires parties with vastly different interests (like carrier ISPs, enterprise networks, OS developers, and application developers) to all buy into the solution within a narrow timeframe.  By contrast, decoupling the solutions might have allowed a more incremental adoption of all of the ideas, or most of them.  (Renumbering for routing scalability might still have been a hard sell.)  Of course, the devil is in the details.
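To make that addressing asymmetry concrete, here is a rough Python sketch of the hypothetical scheme in points 2-3 above; the function names are invented for illustration and don't correspond to any real API or deployed mapping:

    import ipaddress

    def map_v4_to_extended(v4_addr: str) -> int:
        # Embed a 32-bit IPv4 address in a 128-bit "extended" address
        # whose upper 96 bits are zero, as in point 2 above.
        return int(ipaddress.IPv4Address(v4_addr))

    def representable_in_v4(ext_addr: int) -> bool:
        # A v4-only host can carry only 32 address bits in its headers,
        # so it can name a peer only if the upper 96 bits are zero.
        return (ext_addr >> 32) == 0

    # A host that keeps its old v4-derived address is reachable both ways:
    assert representable_in_v4(map_v4_to_extended("192.0.2.1"))

    # A host numbered out of the extended space (nonzero upper bits) can
    # send to a v4-only host, but the v4-only host has no way to name it
    # in a reply -- the asymmetry noted above:
    new_host = (1 << 96) | int(ipaddress.IPv4Address("192.0.2.2"))
    assert not representable_in_v4(new_host)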
And the delay in adopting IPv6 further compounded the difficulties, because networks have changed a lot in the past 15 years.  NATs, firewalls, intrusion detection, and interception proxies were much rarer in the early 1990s.  All of these are affected by IPv6, and all of them impose further barriers to its adoption.  But there's no point in kicking ourselves about this now.

Keith