At 12:32 27/06/2005, John C Klensin wrote:
> (The IETF is here for engineering the protocols, after all).
> Allison, I'm generally supportive of that view. I've actually
> been responsible for writing some of those IANA considerations
> sections that require standards-track actions. In the light of
> this situation and a few recent other ones, I have modified my
> historical view somewhat and would encourage the IESG to do some
> similar rethinking.
Dear John,
I am impressed by this rethinking, and I can add a few elements here. We
created AFRAC (http://afrac.org) to focus on CRC issues (common reference
centers, i.e. granular IANA distribution/extension) at the national level.
From what I gather, IANA (the NIC) was historically the "second" main
networking concept introduced. Larry Roberts built the transport network
using Robert Kahn's systems, and Doug Engelbart proposed a repository,
building the infrastructure of the relational layer of the network, where he
stored Steve Crocker's documents and their parameters. Later on, Louis Pouzin
proposed the catenet. All of this was cemented by TCP/IP and supported later
on by the DNS, which is a formidable example of granular extension and
co-management of an IANA registry. We are therefore at the very core here.
The "transport" layers were in line with other works of the time and have
been worked on a lot since then. I do not see similar work on the NIC
layers other than naming (but no new concept since the RFC 920 consensus)
and Doug Engelbart providing additional tools in mid-80s. I was interested
using them for the documentation/repository I started in 1978 for the
International Network, as we were both in the same group and woking on new
machines which could have welcomed his solutions in that context.
Doug accompanied them with a philosophy named "augmentation": the
collective IQ of a "bootstrap" (http://bootstrap.org) is augmented by its
networking. The Internet bootstrap was the NIC/IANA managerial group, i.e.
the IESG now. This stood against our observation of real international
usage, which could be named "extension": people's reach and relations are
extended by the network. This extension of productivity obeys a network
value law (n.log(n) - µ.n^2) which shows how the network may also represent
a loss of productivity if architectural filtering is too high (this is the
immediate risk with spam, or as shown by the IESG in the HBH code point denial).
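To make that "loss of productivity" point concrete, here is a minimal
numerical sketch of the value law quoted above. The choice of natural
logarithm and the reading of µ as a per-pair filtering/abuse-handling cost
are my assumptions; neither is specified here.

```python
import math

def network_value(n: int, mu: float) -> float:
    """Value law cited above: n*log(n) gain from extended reach,
    minus a quadratic mu*n^2 cost (filtering, spam handling)."""
    return n * math.log(n) - mu * n * n

mu = 0.01  # illustrative filtering cost per pair of users
for n in (10, 100, 1_000, 10_000):
    print(f"n={n:>6}  value={network_value(n, mu):>12.1f}")
# n=    10  value=        22.0
# n=   100  value=       360.5
# n=  1000  value=      -3092.2   <- the quadratic cost now dominates
# n= 10000  value=    -907896.6
```

For any fixed µ > 0 the quadratic term eventually dominates, which is the
formal version of the warning above: past a certain size, heavy architectural
filtering turns the network's value negative.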
These two visions resulted from history. Doug started a documentation
center in 1970 (?) to accompany the development of a new project (ARPANET)
which grew by "catenisation" ("the network of networks"). I started a
documentation center to prevent the balkanisation of an existing network
(Tymnet International) which had to be partitioned (acknowledgment of
national monopolies' rights to "stand-alone systems", or "externets" as we
now name these internal look-alikes of "external networks"). I would say
"the networks in the network".
What is amusing is that my problem at that time was ... Dr. Larry Roberts. He
had created Telenet and we were in competition, under FCC license, so I
could not legally ally with his international team. This led me to create the
Intlnet ad-hoc structure in 1978 to carry out that common documentation work,
supporting Telenet and Uninet as well as every other operator (cf.
http://intlnet.org/eintl.htm - "history" part). Telenet had teamed with BBN
(hence, I suppose, Bob Hinden was already involved?).
Today we face the same problem: who is the standardiser? Is it the designer
(IESG), the developer (Dr. Roberts), or the user (everyone can choose)? We
face that question everywhere. As with the Tymnet network 28 years ago, this
is the demand for a network granularity where the current Internet defaults
to "one": one numbering scheme, one namespace, one IANA, one ASCII script,
one English language, one IETF/IAB/IESG, one ICANN, one legacy, one State
sponsor (RFC 3869), one experience, etc.
This was not the case in the world network we knew with Dr. Roberts. So I
fully understand and support the nature of his request and the obligation
the community has to respond positively. But I also understand the
principle of the IESG "bootstrap" response. We have to find a way to
transform all these "ones" into parameters able to match the values chosen
by the users. This means customising the Internet. Tymnet already had
responses (reliable technology, classes and groups) that the Internet does
not yet have. When OSI was deployed in the mid-80s, it was deployed
internationally over Tymnet as a juxtaposition of networks (X.75). So the
response was simple, rigid, manual (interconnect agreements), limited (CUGs),
but built-in.
A problem is that we are very late. This is not something we can discuss
over the next ten years. This is something which will be definitively settled
by December at the WSIS. We must accept that even if the Internet is not
broken, it is already split (DNS), divided (funding, policy, access to
content), threatened (internationalisation vs. multinationalisation), and
pulverised (PADs: private alias directories, as many name spaces as users).
I think no one expects much more of the Tunis meeting (who knows, they may
be enlightened?) than a confirmation of the granularity/subsidiarity
principles in governance and usage. This is however revolutionary for the
Internet: it started as a centralised experiment, and it continued as an
IETF-documented, standardised decentralisation. It will become a user-defined
distribution. This means a user-centric utilisation architecture is imposing
itself over a network-centric (decentralised), bootstrap-conducted system
architecture. The disputes over HBH, langtags, mail-ID, NAT, IPv6, etc. will
seem peaceful stories compared to the resulting transition if it is made by
force rather than documented and accompanied by the IETF.
There may be other solutions, but I know and accept that the Internet can
address this by being urgently partitioned in a concerted way, preserving
its end-to-end interoperability and extending it to person-to-person
interintelligibility, doing the same as Robert Tréhin in 1977 with the
technical support of Joe Rinde. This is why we must engage in a concerted,
well-thought-out partitioning strategy. IMHO the easiest approach is the
distribution of the IANA. This is the one we observed as successful in the
late 70s and early 80s, and then, in a different way (printed standards,
mutual agreements), with the rigid OSI. Centralised NIC, decentralised OSI:
now we have to come back to a distributed approach. But we have to remember
that Tymnet and OSI were reliable technologies, while TCP/IP is an unreliable
one. IMHO this gives more possibilities and less stability right now. It
will lead to a hybrid vision of intergovernance (co-existence of trusted and
non-trusted spaces of exchange) and possibly to a hybrid technology further
on (for example supporting NASA/ESA types of protocols, including FEC).
We have the experience of ICP-3, where ICANN called in vain on the IETF to
experiment (we did, and learned: AFRAC and NICSO result from it). Then the
experience of the root server distribution demanded by the WSIS: it was
only addressed by distributing the content, which changes nothing about the
system's global vulnerability, which lies in the content - as the recent
AFNIC/IANA/VeriSign incident shows - nor about the intelligence issues.
Then we have the experience of the langtags, where the IANA is perceived by
those opposing the current, twice-failed proposition as a way to impose a
commercial grid of languages and a competition with, rather than a
cooperation with, the ISO spheres. Now we have the HBH case.
If one carefully analyses all these issues, one sees that:
- stability of the network calls for unique or concerted content in the
registries
- when the content of the registries differs, its versions must be signed
(see the sketch after this list). We must switch from IETF filtering to an
advised and possibly multi-labelled registration.
- surety of the networks calls for speed in updates - far better than the
current IANA three to six months. I am sure there are more than 30
differently dated versions of IANA root files currently in use on the
Internet. Collectors must give the authority over content back to authors.
- security calls for correlative cross-checking procedures and tools. This
starts with a standardisation of the registries and of their tools, formats,
and methods of access, update, retrieval, etc. Scalability demands that this
be studied in relation to the ISO 11179 work, which the IETF should enrich
and simplify (as LDAP did vs. X.500).
- the resulting necessary intergovernance processes must be studied,
documented and technically supported.
- more generally, we need to switch from a philosophy of global trust to a
philosophy of global distrust with local spaces of trust, exchanges and
services.
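To make the "signed versions" bullet concrete, here is a minimal sketch,
assuming nothing about actual IANA tooling: each circulating copy of a
registry carries its author, date, content digest and a signature, so that
differing versions can be told apart and traced back to their author. The
HMAC shared key stands in for a real public-key signature, and all field
names here are hypothetical.

```python
import hashlib, hmac, json, time

# Illustrative only: a real deployment would use public-key signatures,
# not a shared secret; the field layout is mine, not any IANA format.
AUTHOR_KEY = b"example-shared-secret"

def sign_registry_version(registry_text: str, author: str) -> dict:
    """Wrap one version of a registry's content with enough metadata
    (author, date, digest, signature) to tell differing copies apart."""
    digest = hashlib.sha256(registry_text.encode("utf-8")).hexdigest()
    record = {
        "author": author,
        "date": time.strftime("%Y-%m-%dT%H:%M:%SZ", time.gmtime()),
        "sha256": digest,
    }
    payload = json.dumps(record, sort_keys=True).encode("utf-8")
    record["signature"] = hmac.new(AUTHOR_KEY, payload, hashlib.sha256).hexdigest()
    return record

def verify_registry_version(registry_text: str, record: dict) -> bool:
    """Check both the content digest and the author's signature."""
    if hashlib.sha256(registry_text.encode("utf-8")).hexdigest() != record["sha256"]:
        return False
    unsigned = {k: v for k, v in record.items() if k != "signature"}
    payload = json.dumps(unsigned, sort_keys=True).encode("utf-8")
    expected = hmac.new(AUTHOR_KEY, payload, hashlib.sha256).hexdigest()
    return hmac.compare_digest(expected, record["signature"])

# Two differing copies of "the same" registry become distinguishable:
v1 = sign_registry_version("root-zone snapshot A", "registry-author@example")
print(verify_registry_version("root-zone snapshot A", v1))  # True
print(verify_registry_version("root-zone snapshot B", v1))  # False
```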
All this can be more easily put together at the CRC level than anywhere
else, but it must eventually and consistently expand everywhere and become
the basis of the whole standardisation doctrine and model. We cannot continue
to trust the breakers and then try to catch them. We must distrust everyone
who has not proven to be a friend. Do you invite into your home, lend your
wallet to, and leave your kids with everyone you meet in the street? E-life
is no different from real life.
etc.
This mail will probably be disregarded by most and dismissed as a troll by
some. Nevertheless, this is what is going to happen, this is the way we
should work on it, and this is what is happening. Obviously, this is a change
in the meaning of "Internet" towards "International Network", where
"International" is an image of the granularity and of the diversity in
shape, governance, requirements, modes, protocols, etc. of the world digital
ecosystem's management and convergence.
I am not sure the IETF is ready for it: in addition to the architectural
expansion, it means, for example, the end of the international, multilingual,
and vernacular layer violations permitted by the ASCII default, and therefore
a probable shift of the IETF's nexus at the end of the day. But this is
going to happen, with or without the IETF. I hope it is with it.
jfc