John,

JCK> The design of Punycode was for machine-to-machine communication, as
JCK> you point out. It was never really intended or expected to "leak"
JCK> into use environments.
...
JCK> Indeed, it was chosen, in preference to some other options, on the
JCK> grounds that
JCK> * it could be deployed quickly,
JCK> * applications could (and would) be speedily updated to
...
JCK> * when it did leak, users would find that acceptable
...

The ascii-encoding paradigm for incremental adoption is "expected" to have four usage characteristics:

1. It can be deployed usefully on an incremental basis, where "incremental" means that it becomes useful when very few people have adopted it -- for example, useful for the first two users that exchange it -- and is increasingly useful as more people adopt it.

2. It is transparent to the transfer infrastructure.

3. When it shows up at an unmodified application, such as a mail user agent, it does not break the system.

4. Data entry can be done in an unmodified application, such as an email Reply command, albeit typing the string will probably not be an experience to savor.

Hence, #1 and #2 mean it can be deployed quickly, because it does not require everyone to adopt it before it is useful. This is in marked contrast with any approach that first requires modifying the infrastructure.

#2 and #3 clearly imply that end-users will see these strings. So, leakage is entirely expected. It is not desired, but it is expected.

The concern is not whether such leakage occurs or whether the string is human-friendly. The concern is whether anything breaks when it does show up, as we know it will. And, indeed, Punycode does not break legacy software. So it satisfies the requirements and expectations of this incremental adoption paradigm just fine.

JCK> The reality so far is different. The IDN browser world has
JCK> turned into one of conflicting plug-ins, apparently none of
JCK> which are fully satisfactory.
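[As a concrete illustration of characteristics #2 and #3: Python's standard-library codecs implement both bare Punycode (RFC 3492) and the full IDNA ToASCII form, so the pass-through property can be sketched directly. The label "bücher" below is my own example, not from this discussion.]

```python
# Punycode maps a Unicode label to pure ASCII, so it transits
# legacy, ASCII-only infrastructure unchanged (characteristic #2)
# and shows up as an odd-but-harmless string in an unmodified
# application (characteristic #3).

label = "bücher"  # hypothetical example label

# Bare Punycode encoding of a single label (RFC 3492):
ascii_label = label.encode("punycode")
print(ascii_label)  # b'bcher-kva' -- plain ASCII bytes

# The full IDNA ToASCII form adds the "xn--" ACE prefix per label:
domain = "bücher.example"
ace = domain.encode("idna")
print(ace)  # b'xn--bcher-kva.example'

# An updated application can round-trip back to Unicode for display:
assert ace.decode("idna") == domain
```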
Since I am far less than a diligent reader, I have not seen the discussion of this, but I certainly would like to understand it better. Any citations to material on this would be appreciated.

My sense of most adoption processes, for any interesting bit of application technology, is that they have an ugly startup. For example, the first interoperability test of TCP did not initially work. Folks had read the spec differently for the checksum algorithm. (And the way this was resolved was to look for the first pair of hosts that could communicate and then retrofit the specification with whatever algorithm they were using.)

Another downside of not being a diligent reader is that I do not recall seeing discussion of interoperability testing for IDNA. Surely it has been done? Should more events be scheduled?

JCK> For other applications, the
JCK> conversion/upgrade rate has been low and most estimates seem to
JCK> point to its being at least 18 more months before we see
JCK> significant progress in applications that have really wide
JCK> deployment

Let's ignore that the RFCs were issued only in March of this year. Let's use the IESG Last Call as the "release" date. That means May, 2002. Starting from that milestone, and accepting your estimate of 18 months, it means that IDNA will be significantly useful within 3 years of completion.

Offhand, I'd say that matches the experience with MIME, and matches the general expectations for adoption of a scheme that provides incremental benefit. This is quite a bit shorter than the adoption times for an approach that requires infrastructure change. (Even now, does anyone really rely on email Delivery Status Notifications working over the public Internet? It had its last call in 1995.)

So, your assessment of the likely time it will take for IDNA to be significantly useful strikes me as very good news, indeed.
JCK> the IETF community should probably understand what it looks like
JCK> and what the issues are in converting to and from it, including
JCK> the fact that, for expression of "names", nameprep is a lossy
JCK> conversion. And that is precisely the "eat our own dogfood"
JCK> point. More important, we need to think about --and probably
JCK> experience-- that point as we consider approaches for email
JCK> local parts, which more often contain names about which people
JCK> are _really_ sensitive.

The question of lossiness does not have to do with the incremental adoption paradigm. It has to do with problems in the details of the IDNA mapping algorithm. My own suspicion has been that it tries to do too much. That is certainly my sense of the IMAA scheme. It needs to do less. It needs to be more literal.

From a design standpoint, this is a matter of tuning the details, rather than rejecting the paradigm.

d/
--
Dave Crocker <dcrocker-at-brandenburg-dot-com>
Brandenburg InternetWorking <www.brandenburg.com>
Sunnyvale, CA USA <tel:+1.408.246.8253>
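[An editorial illustration of the nameprep lossiness discussed above: Python's built-in idna codec implements the RFC 3490/3491 ToASCII mapping, so the effect can be shown in a few lines. The mixed-case name is my own example.]

```python
# Nameprep case-folds (among other mappings) before the Punycode
# step, so distinct source spellings collapse to a single ACE form
# and the original spelling cannot be recovered -- the conversion
# is lossy. Python's stdlib idna codec (RFC 3490/3491) shows this.

original = "BÜCHER.example"   # hypothetical name a user might type

ace = original.encode("idna") # ToASCII applies nameprep first
print(ace)                    # b'xn--bcher-kva.example'

round_trip = ace.decode("idna")
print(round_trip)             # 'bücher.example' -- the case is gone

assert round_trip != original # the mapping was lossy
```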