> To repeat: Dual stack is entirely separate.
>
> That's the approach that was chosen. IPv6 is an incompatible protocol
> module, compared with IPv4. Independent addressing. Independent
> interfacing. Independent management.
>
> What I described was a compatible upgrade. Very different beast.
>
> Yes, it is not perfectly compatible. The only way to achieve that is
> to have purely syntactic differences.
>
> Oh. Wait a minute. What I described -- as a first step for adoption
> -- would have been a purely syntactic difference, albeit one that set
> the basis for fixing the only problem that IPv6 was originally asked
> to solve: bigger address space.
>
> Exploiting that basis would have been moved to a strictly
> administrative step. Prior to exploiting it, interoperability between
> IPv4 and IPv6 could have been perfect and easy.
>
> But it would have had the feature of being adopted as no more than a
> software upgrade (and availability of syntax-translating routers.)

You make it sound like a trivial step, but it's not. That "no more than
a software upgrade" is roughly equivalent to the effort required to
upgrade IPv4 software to IPv6 today (different APIs, different DNS
records, updated apps, and difficulties for p2p apps; a sketch of what
the API side of that looks like appears below), and I'll note that
after 15 years that effort is still not complete (especially for
software that isn't bundled with operating systems). And "the
availability of syntax-translating routers" can't necessarily be
assumed either, particularly not if there would have been a need to
draw firm borders between legacy IPv4 and enhanced IPv4 enclaves
(though perhaps NATs would have evolved in that direction).

>> IPv6) also must be able to originate and receive either IPv4 packets
>> or the bigger IPv6 ones. Sure, the details may be somewhat different,
>> but fundamentally, we have dual stack, with IPv6 nodes needing to
>> support IPv4 for backwards compatibility.
>
> And what I described was an approach that would have permitted a
> "pure" IPv6 host, where interaction with an IPv4 host required a
> syntax-translating relay of some sort.

As you probably remember, the "syntax-translating relay" approach at
the boundary of different mail enclaves (ASCII-only vs. 8-bit
transparent, ASCII headers vs. UTF-8 headers) has been discussed
several times and generally rejected in favor of per-message
negotiation, mostly because of the difficulty of drawing a clean
boundary between enclaves (and also the difficulty of having flag days
during which an entire enclave upgrades). Offhand I'm not sure why IP
networks would find this kind of operation any easier.

> This approach does not prohibit having a host implement both formats,
> but what is fundamental is that it does not require it. This is in
> marked contrast with what we have now, needing a much hairier
> different translating relay, independent address administration and,
> really, independent operations and management. V6 is an independent
> network from V4.

Independent address administration, and treating v6 as a separate
network from v4, do impose a barrier to adoption. I understand why
IPv6 didn't go this way. But I also wonder whether the costs of having
IPv6 be completely separate were as well understood as the advantages.
I think too many people assumed that everyone would independently
adopt IPv6 in the absence of any immediate advantage to doing so, and
also that the barriers to "adopting IPv6" looked far simpler in the
mid-1990s than they are today.
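To make the "different APIs, different DNS records" point concrete,
here is a minimal sketch of the change every application has to absorb
(in Python, chosen purely for illustration; the connect_any function
and the code itself are mine, not anything from the original
discussion): stop assuming a 32-bit AF_INET address from
gethostbyname() and instead iterate over whatever address families the
resolver returns, A and AAAA records alike.

    import socket

    def connect_any(host, port):
        """Sketch of an address-family-agnostic client connect: ask the
        resolver for every address it knows (IPv4 A records and IPv6
        AAAA records) and try each one until a connection succeeds."""
        last_err = None
        for family, socktype, proto, _, sockaddr in socket.getaddrinfo(
                host, port, socket.AF_UNSPEC, socket.SOCK_STREAM):
            try:
                sock = socket.socket(family, socktype, proto)
            except OSError as err:
                last_err = err
                continue
            try:
                sock.connect(sockaddr)
                return sock   # works the same for IPv4 and IPv6 peers
            except OSError as err:
                sock.close()
                last_err = err
        if last_err is None:
            last_err = OSError("no addresses found for %r" % host)
        raise last_err

Multiply that small change (plus its server-side and p2p equivalents,
and the corresponding records in the DNS) across every networked
application, and you get a sense of why "no more than a software
upgrade" takes many years, whichever packet format the upgrade targets.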
>> And in the network, routers have to understand both the original IPv4
>> format, plus the new IPv6 format.
>
> Yes, anything looking at a format must understand it. If IPv4 traffic
> is mixed with IPv6 traffic, then yes the routers need to understand both.
>
> The difference in what I described is that networks that do only one
> of the formats would nonetheless be part of a unified global service.

Not clear. For instance, would enhanced IPv4 hosts universally be able
to reach one another? Or would the absence of translating relays in
some cases make their connectivity spotty?

> (For reference, I am being so painfully redundant in making my points,
> here, because it seems to be necessary.)

The problem might not be that people don't understand what you are
saying, but that people don't so readily accept that the alternative
would have been as simple and easy as you seem to think.

>> If there was a magic "trivial" transition/upgrade strategy, we would
>> have done it years ago.
>
> You must have been participating in different discussions than I was.
> If one looks at the style of discussion now, what we see is an effort
> to dismiss criticisms and alternatives, rather than counter them
> seriously.

People may differ on what constitutes a dismissal versus a serious
counterargument.

> This is what took place back then, too. Timely deadlines were
> dismissed. Simplicity was dismissed. Integration was dismissed.

That's a bit of an over-simplification. Simplicity was highly valued --
that's why the IPv6 packet format ended up being simpler than IPv4's,
and part of why the idea of incorporating the IPv4 address space into
IPv6 was rejected. Integration wasn't dismissed out of hand; instead, a
number of proposals were considered at length. Sure, the effort ran
long, but so does everything else the IETF does. It was obvious that
IPng was going to require changes to host stacks, apps, DNS, and
routers no matter which proposal was chosen. The dual-stack alternative
looked attractive because it carried with it the notion that someday
the IPv4 stack could be dropped. (People don't like the idea of old
cruft hanging around forever. The most frequent complaint I hear about
RFC 2047 is not that it works poorly or that the encoded words leak
through where they shouldn't, but that it's ugly and hard to get rid
of.)

I think that in hindsight there was too much emphasis on producing a
desirable end-state and too little emphasis on producing an attractive
transition path. And I think the transition model that most people
assumed was, well, naive. But in the early 1990s, before Windows even
had an IP stack, when the net was much smaller and had a higher clue
density, and when the web was just a small-scale experiment that didn't
even have the IMG tag yet (and therefore much less glitz potential), it
was possible to imagine that the small, generally clueful net would see
the necessity of moving to IPv6 and do it fairly quickly. By the late
1990s, when IPv6 was thought to be finished, the net was too big to
make that kind of transition in a short time.

> The ones that I was around suffered from a classic second-system
> syndrome of a) lack of pressure to deliver a timely solution, b)
> feature creep, c) lack of concern for interoperability.

We certainly agree that IPv6 suffered from the second-system effect.

> One would think that a 15-year project that was pursued to solve a
> fundamental Internet limitation but has achieved such poor adoption
> and use would motivate some worrying about having made some poor
> decisions.
> A quick response that says "we talked about that" but says no more
> seems a little bit facile.

Looking back to try to abstract lessons seems worthwhile, though it
makes no more sense to dismiss the choices that were made as obviously
wrong than to dismiss the alternatives you are suggesting as obviously
wrong. To the extent that IPv6 could have been different in a way that
made it more successful, the differences are subtle and the effects
even more so.

As for worrying, I see little value in that now. And as for quick
responses, that's the nature of anything discussed on the IETF list, or
pretty much any mailing list that I've seen.

Keith