Howdy,
On Wed, Jul 15, 2015 at 9:18 AM, Patrik Fältström <paf@xxxxxxxxxx> wrote:
> There are many shades of gray ;-)
This point seems to have consensus :-).
> Well, I hope that adding names to the special-use names registry will result in those names being on the "forbidden" list for any new round of TLDs ICANN might launch. All of this depends on the outcome of the PDP run by ICANN, of course.
From this point of view the special-use names registry is actually a registry of "labels forbidden to the DNS in a specific slot". But my view of the reasoning is that for .test, .invalid, and .example, "special processing" means "no processing; this isn't real". RFC 6762 wasn't "no processing" but instead "no processing in global DNS; process locally using mDNS instead." That confirmed that the IETF would be willing to register labels forbidden to the DNS in a specific slot because they were otherwise resolved, rather than simply unresolvable. And now we have the next such request, which once again relies on an installed base to claim necessity.
From an architectural perspective (but still wearing my hat as an individual), this method of partitioning the namespace has very poor long-term characteristics. If we permitted it generally, we would be sanctioning a system in which name resolution requires a single root plus some local knowledge. That local knowledge will be in some unknown state for the vast majority of devices and implementations for a long time (predict for me, if you like, the date the last query for .onion will hit the root, and I'll buy you a donut if it occurs within a year of your guess), and if the local knowledge required expands over time, essentially forever. That's bad, and pretty much needless, as there are lots of other ways to partition the namespace. Pseudo-TLDs are not required; they look convenient because they hide the costs.
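To make that concrete, here is a toy stub-resolver sketch of my own (the table and all the helper names are made up for illustration, not any real API): every client has to carry something like SPECIAL_USE and keep it current, or queries for the "special" names leak to the root.

    # A toy stub resolver illustrating "single root + local knowledge".
    # SPECIAL_USE and the helpers are hypothetical stand-ins.

    SPECIAL_USE = {
        "local": "mdns",   # RFC 6762: resolve via multicast DNS, never the root
        "onion": "tor",    # proposed: resolve via the Tor network, never the root
        # ...and this table must grow, and stay current, on every client
    }

    def resolve_mdns(name):        # placeholder for an mDNS lookup
        return f"mdns-answer({name})"

    def resolve_tor(name):         # placeholder for a Tor rendezvous lookup
        return f"tor-answer({name})"

    def resolve_global_dns(name):  # placeholder for an ordinary DNS query
        return f"root-answer({name})"

    def resolve(name):
        tld = name.rstrip(".").rsplit(".", 1)[-1].lower()
        mechanism = SPECIAL_USE.get(tld)
        if mechanism == "mdns":
            return resolve_mdns(name)
        if mechanism == "tor":
            return resolve_tor(name)
        # A client with an out-of-date table falls through here, and the
        # query for the "special" name leaks to the global root.
        return resolve_global_dns(name)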
> I know some people say that this opens the door for someone to request strings in the IETF and create a denial-of-service attack against the "approval process" ICANN runs, but I trust the IETF to do the right thing.
I think this is the wrong analysis of the risk. If someone, seeing the acceptance of .local and .onion, decides they want some other resolution mechanism and creates .npr for their novel resolution process, it will work for those clients updated with the local knowledge. When the journalistic outfit "NPR" comes calling at ICANN and gets the name in the root, that community may not even know it's going on. But afterwards,
we will have local knowledge in conflict with the root, with the ordering of resolution steps deciding what happens. That's fragile for everyone, not least the people now running the gTLD .npr.
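A toy illustration of that fragility (again my own sketch, with made-up helpers): once .npr is both squatted locally and delegated in the root, the answer you get depends entirely on which resolution step runs first.

    # Toy illustration: .npr now exists both as squatted local knowledge
    # and as a delegated gTLD in the root. Both helpers are made up.

    def try_local_mechanism(name):
        return "squatters-answer" if name.endswith(".npr") else None

    def resolve_global_dns(name):
        return "gtld-answer"  # the root now delegates .npr too

    def resolve_local_first(name):
        return try_local_mechanism(name) or resolve_global_dns(name)

    def resolve_root_first(name):
        return resolve_global_dns(name) or try_local_mechanism(name)

    print(resolve_local_first("news.npr"))  # -> squatters-answer
    print(resolve_root_first("news.npr"))   # -> gtld-answer
    # Two hosts that order the steps differently silently disagree.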
We saw this with squatting in the URL scheme space (surely you remember the fun with mms?). Either the process of registering local partitions of the global namespace has to be so easy that they get registered very early, or we have to avoid this style and establish other methods of signalling alternate resolution in application slots.
The latter at least *might* scale. This does not.
regards,
Ted
Ok, now the question is whether .ONION has passed the bar regarding, for example, deployment, and I think it has. Yes, the deployment is more limited than .LOCAL's, but it is definitely deployment, and, if I understand things correctly, there are multiple independent implementations and deployments.
>> What IETF might need is a stopping function for approval of
>> usage of the domain name namespace outside of the DNS.
>
> I think the intent was that the specifications and
> considerations in RFC 6761 established precisely such a stopping rule. If this "onion." proposal (and/or the other special-use names proposals I've seen floating around) demonstrate to the community that 6761 is not adequate, then perhaps we should put those proposals for new special-use names on hold and go back and review 6761 to see if the evaluation criteria need
> improvement.
Agree.
> Personal opinion (and maybe a hint about something that may need examination about the way 6761 is being interpreted): If someone came to the IETF with a new piece of protocol that needed a
> reserved domain name and asked for a root-level name, I assume they would get a lot of pushback suggesting that they use an
> ARPA. subtree or some commercially-available subdomain instead.
Note what I wrote above: the IETF special-use names registry is NOT for things that go as TLDs in the root zone. Quite the contrary. So the IETF cannot this way allocate a TLD in the root zone, which is what I interpret your text above as implying.
If what you are talking about here is allocation of a branch of the namespace, the question is whether bullet 1 in section 2 is strong enough:
1. Users: human users are expected to recognize .onion names as
having different security properties, and also being only
available through software that is aware of onion addresses.
I guess this goes back to discussions similar to whether one should have URI:HTTP:// or just HTTP://, and we all remember those discussions...
> I'd hope the IETF would listen carefully to arguments about why a TLD was really needed, but, assuming we still believe in the distributed hierarchy model that underlies the DNS, I'd assume that the default would be "no" and a persuasive case for a
> root-level name would be very hard to make.
Fair.
> The difference
> between that scenario and some of the special names proposals that now seem to be floating around is that, rather than
> engaging with the IETF when the protocols were being designed, the community involved decided to pick a TLD-style name, squat on it, deploy, and then come to the IETF looking for help in obtaining formal rights to the previously-squatted name. It does not seem to me to be in the best interests of either the IETF or the Internet for us to encourage that type of sequence of events.
True, but if we look at the chat protocols, the IETF could not agree on which one of three different protocols should move forward. Then XMPP came along, which, well, was developed elsewhere and basically "won".
We now have HTTP/2, which some people believe could have been done multiple years ago based on BEEP, a very good protocol at least academically/theoretically...but how is adoption of it going?
Anyway...
I am just saying that having ideas actually be cooked inside the IETF is not easy...and has not been easy for many years. So the mere fact that an idea is brought to the IETF only once it is already deployed cannot by itself be a reason for saying no.
That said, I completely understand what you write (I hope!), and you are right that there are open questions.
Will we be able to find purely objective rules for saying yes or no? I don't think so...unfortunately.
Patrik