> On 6/15/10 11:04 AM, ned+ietf@xxxxxxxxxxxxxxxxx wrote:
>> And since I'm not in the best of moods I'll also answer in kind by saying us application engineers might also be waiting for someone with half a clue as to how to design a proper standard API to come along and do that.
>
> Ned,
>
> Agreed, better progress should have been made. What impact do you see a new suite of network APIs making?
That depends on what API you're talking about, and what you're hoping to accomplish. At this point, work on the core C API with the intent of helping along IPv6 deployment by getting application developers on board is IMO a complete waste of time. Consider: Given how these things go and what needs to be done, we'd be lucky to have anything in 18 months, and 2-3 years is not beyond possibility. And that's just the start. Even if you assume implementations are done in line as the standard is developed, think how long it will be before there will be enough deployment that application developers will be able to count on the new routines enough to take advantage of them.

An aside: We got a new, corporate-standard Windows laptop for our office the other day. (Needed because everyone in the office is on Mac but a bunch of Oracle internal apps only work on Windows.) Both the box and the laptop itself were covered with Windows 7 stickers. But when we booted the thing, XP came up. The original Windows 7 install had been erased and replaced with the corporate standard configuration, which is currently XP, despite the fact that XP is now almost 9 years old. And I doubt Oracle is alone in doing this sort of thing.

This, like it or not, is the world application developers live in. A new API may sound really keen, but it's useless until you can count on it being present on the overwhelming majority of the platforms people actually use. And the lead time there seems to best be measured in decades, not years.

Now, if the goal is simply to make the world a better place for application development, then sure, coming up with a standard connectbyname interface makes all sorts of sense. But maybe this time someone might want to talk with the folks who actually make the most use of the API when designing it. Just sayin'.

Another alternative would be to try and improve the routines available in various higher-level programming languages, which, while in general far superior to the socket-level stuff, are nevertheless lacking in various ways. It is much easier to deploy a new version of Java, PHP, Perl, etc. than it is to deploy stuff at the operating system level. But once again, since most of the popular languages were updated to support IPv6 some time ago, meaning most applications written in those languages just work with IPv6 with no changes, this isn't going to help IPv6 deployment much if at all. Broader support for SRV records, OTOH, is a need that might be met to some extent this way.
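To be concrete about the connectbyname idea: below is a rough, untested sketch of the general shape such a routine could take, built on top of getaddrinfo(). The name connect_by_name() and its signature are invented purely for illustration; a real standard would also need to cover SRV lookups, timeouts, and trying addresses in parallel rather than serially.

/* Rough, untested sketch only.  The name and signature are invented
 * for illustration; a real connect-by-name standard would also need
 * SRV support, timeouts, and parallel connection attempts. */
#include <sys/types.h>
#include <sys/socket.h>
#include <netdb.h>
#include <string.h>
#include <unistd.h>

int connect_by_name(const char *host, const char *service)
{
    struct addrinfo hints, *res, *ai;
    int fd = -1;

    memset(&hints, 0, sizeof(hints));
    hints.ai_family = AF_UNSPEC;      /* v4 or v6, whichever is offered */
    hints.ai_socktype = SOCK_STREAM;

    if (getaddrinfo(host, service, &hints, &res) != 0)
        return -1;

    /* Try each address in the order the resolver returned them. */
    for (ai = res; ai != NULL; ai = ai->ai_next) {
        fd = socket(ai->ai_family, ai->ai_socktype, ai->ai_protocol);
        if (fd < 0)
            continue;
        if (connect(fd, ai->ai_addr, ai->ai_addrlen) == 0)
            break;                    /* connected */
        close(fd);
        fd = -1;
    }

    freeaddrinfo(res);
    return fd;                        /* -1 on failure, else a connected socket */
}

The point of even a trivial wrapper like this is that the address family never appears in the caller's code, which is exactly what lets applications pick up IPv6 without changes.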
> It is not hard to understand the view that one should avoid making NATP translations, since IPv6 should easily be able to avoid this issue.
The operative word here is "should". Sure, the glut of addresses IPv6 provides theoretically solves this problem. But in practice, the lack of IPv6 connectivity for the overwhelming majority of users means that anyone counting on IPv6 to eliminate the need for either multiple IPv4 addresses or NATPT is being extraordinarily foolish.
> When dealing with older code that should have been updated, dual-stack transitional schemes such as DS-Lite or 6to4 depend less on existing code working directly with IPv6.
You're significantly overestimating the issues with existing code, and significantly underestimating the other problematic aspects of IPv6 deployment.
> Most expect port mapping agility, or manual intervention, will retain functionality by moving this function into the realm of newer equipment. Access to maintenance interfaces is another area where proprietary schemes are working well. Even Debian-based distributions such as Ubuntu offer pre-installed services which make remote configuration easier and safer. Having fewer maintenance interfaces exposed directly to the Internet is a good thing, since few older interfaces have adequate protection.
> Here is a document that explains how the AirPort router supports an API for managing port mappings: http://tools.ietf.org/html/draft-cheshire-nat-pmp-03
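For anyone who hasn't looked at the draft, the mechanism it describes is tiny: the client sends a 12-byte UDP request to its default gateway on port 5351 asking for a port mapping, and the gateway answers with the external port it assigned. A rough, untested sketch of just the request side, based on my reading of the draft, might look like the following; the gateway address here is a placeholder, and a real client would also have to discover the gateway, parse the response, and handle retransmission and renewals.

/* Rough, untested sketch of a NAT-PMP mapping request as described in
 * draft-cheshire-nat-pmp-03: a 12-byte UDP packet to the gateway on
 * port 5351.  Gateway discovery, response parsing, retransmission and
 * renewal are all omitted. */
#include <sys/types.h>
#include <sys/socket.h>
#include <netinet/in.h>
#include <arpa/inet.h>
#include <stdint.h>
#include <string.h>
#include <unistd.h>

int request_tcp_mapping(const char *gateway_ip, uint16_t internal_port,
                        uint16_t external_port, uint32_t lifetime_secs)
{
    unsigned char req[12];
    struct sockaddr_in gw;
    uint16_t p16;
    uint32_t p32;
    int fd;
    ssize_t rc;

    req[0] = 0;                /* version 0                         */
    req[1] = 2;                /* opcode 2 = request a TCP mapping  */
    req[2] = req[3] = 0;       /* reserved, must be zero            */
    p16 = htons(internal_port);
    memcpy(&req[4], &p16, 2);  /* internal port                     */
    p16 = htons(external_port);
    memcpy(&req[6], &p16, 2);  /* suggested external port           */
    p32 = htonl(lifetime_secs);
    memcpy(&req[8], &p32, 4);  /* requested mapping lifetime (secs) */

    memset(&gw, 0, sizeof(gw));
    gw.sin_family = AF_INET;
    gw.sin_port = htons(5351);           /* NAT-PMP port on the gateway */
    if (inet_pton(AF_INET, gateway_ip, &gw.sin_addr) != 1)
        return -1;

    fd = socket(AF_INET, SOCK_DGRAM, 0);
    if (fd < 0)
        return -1;

    rc = sendto(fd, req, sizeof(req), 0, (struct sockaddr *)&gw, sizeof(gw));
    close(fd);
    return rc == (ssize_t)sizeof(req) ? 0 : -1;
}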
I'm sure this solves a problem for someone somewhere, but it doesn't address any of the issues I've raised in any meaningful way.
> This approach avoids complex service- and device-specific structures, and dependence upon insecure, complex, and proprietary assignment protocols that ultimately depend upon users being updated and knowing when to click okay.
No doubt it does, but since none of these things you're railing against are at issue here, it's entirely irrelevant.

Ned