>I want to be able to give you a URL and have you resolve it. That
>only works if we speak the same transport protocol.
>
>I want people to be able to reference HTTP and get interoperability,
>not to have to write a profile of http.
Sam,
there are several ways to understand your exact concern.
Let me keep considering the network ecology. This is the real issue
we have with interoperability, i.e. its inner structure.
- either you consider the Internet as Harald Alvestrand does in RFC
3935: something whose building the IETF leadership influences along
its values. This vision is OK with me as long as this Internet is one
system among others (e.g. TCP/IP vs. OSI). You can decide to
constrain it to enforce the internal interoperability its unique
governance wants, as Harald does with languages and as you would with
HTTP. Every time you give a URL, you are to reach the same site. This
is also how the IGF still sees things: every time you give a URL, you
hope you reach the same site.
- or you consider the digital system as it is: a living global mess
of many technologies, bugs, conflicts, etc., with its own ecology
(the way it usually reacts to things). Every time you give a URL, you
do not know whether you will reach the same site. So you organise
yourself to preserve and develop interoperability and to increase
your chances, depending on your contexts.
Obviously the cybernetics of a closed top system (as in the first
vision above, the vision of the Internet until now) is simpler and
easier. You can delegate compatibility, security, compliance, etc. to
lower layers. You are the root. This is Plato's paradigm: the dead,
the living, and the pilot (kubernetes). You are the captain of
your Internet ship.
The cybernetics of Plato's extended paradigm is far more complex
(and more intellectually rewarding): there are pilots aplenty, each
with his own ship. So the problem is not really the internal
interoperability of a single network-of-networks governance. It is
the mutual interoperability within the intergovernance of billions of
metanetworks on the single, messy digital-ecosystem supernetwork of
the thousands of infrastructural networks: the networks' network's
networks. There are three possibilities:
- either all the ships make a fleet you can lead in an autocratic
"Admiralty Way";
- or they are a convoy and you try to influence them to sail in a
similar (more or less democratic) way without too many collisions;
- or they are independent shipping, with captains minding their own
business, whom you can inform of the difficulties ahead and of your
own course.
It means that in the first case you decide on the network, and for
the others: you impose your own interoperability. In the second case
you discover the ecosystem and your companions' laws, and you suggest
what we can do to manage interoperability. In the third case you want
to make a model of the ecosystem, of its risks and advantages, and
you carry out your own development, concerting with others about
interoperability, building it on your side if they do not mind and
you need it, and fighting others when they want to impose limits that
hurt you. This is the same as local trade, international guilds, and
the WTO for commerce.
To come back to RFC 3066 as a showcase of the issue: there is no
opposition between Harald's vision and mine. They are complementary,
each at its own layer. Globalization of the ASCII Internet is OK,
because globalization is localised: each language is to be globalized
the same way English is for the ASCII Internet. So, when the WSIS
decided that the Internet is a USG issue, it localised the Internet
and there is no more conflict [oddly, you approved RFC 3066 a couple
of hours later]. The US/Internet practice can perfectly well impose
its own local vision of interoperability. And at the same time, other
externets (external-network look-alikes) can have their own vision
(my MDRS for example, or China's for the DNS). The problem is the
inter-externet interoperability. In the current mono-Internet IETF
vision, interoperability is defined by RFC 2026 as an internal
interoperability. All my dispute is about external interoperability.
A simple way to address this problem is to accept the concept of a
Multi-Internet, making the external interoperability of the
mono-Internet an internal interoperability issue of the
Multi-Internet.
Once you have done this (which is perfectly in line with the way
Brian's RFC 1958 resolves such a problem), you only consider
gradations in interoperability and you can document a Bayesian
interoperability. There you will find solutions for people to reach
the URL you wanted them to want to access (which may not be the one
you gave, but the one they need in order to understand what you want
to convey). You no longer have a root, but you have worked a garden
out of the forest of the digital ecosystem. In the DNS area, this is
what ICANN asked the IETF to experiment with in its ICP-3 document.
The IETF was not interested. We carried out this experimentation. It
showed that the current Internet technology can easily support it and
that it is probably the only way for it to develop, uncoupling the
developments of its various externets and layers.
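
As a purely illustrative aside on what a graded, "Bayesian"
interoperability could look like: the sketch below is my own
assumption (the contexts, priors, and outcomes are invented, not
anything specified by ICP-3, RFC 1958, or the IETF). It treats
interoperability not as a yes/no property but as the probability that
a reference resolves to the resource the sender intended in a given
resolution context, updated from observed outcomes with a simple
Beta-Bernoulli model in Python:

from dataclasses import dataclass

@dataclass
class ResolutionBelief:
    """Beta(alpha, beta) belief that a context resolves a reference as intended."""
    alpha: float = 1.0  # prior pseudo-count: "reached the intended resource"
    beta: float = 1.0   # prior pseudo-count: "reached something else, or nothing"

    def observe(self, reached_intended: bool) -> None:
        # Bayesian update of the Beta belief from one resolution attempt.
        if reached_intended:
            self.alpha += 1.0
        else:
            self.beta += 1.0

    @property
    def interoperability(self) -> float:
        # Posterior mean: a graded interoperability value between 0 and 1.
        return self.alpha / (self.alpha + self.beta)

# Hypothetical resolution contexts ("externets") for the same URL.
contexts = {"mono-Internet": ResolutionBelief(), "externet-A": ResolutionBelief()}

# Made-up observations, for illustration only: did the URL reach the
# resource the sender intended?
for outcome in (True, True, True, False):
    contexts["mono-Internet"].observe(outcome)
for outcome in (True, False, False):
    contexts["externet-A"].observe(outcome)

for name, belief in contexts.items():
    print(f"{name}: graded interoperability ~ {belief.interoperability:.2f}")

Comparing such posterior values across contexts is one way to
document the gradations mentioned above; the model itself is only a
sketch.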
All the best.
jfc