On Jul 26, 2007, at 8:47 PM, Hallam-Baker, Phillip wrote:
I don't think that I am misrepresenting the paper when I summarize
it as saying 'keep the complexity out of the network core'.
I'm slogging through some old email, and chose to pick up on this.
Following Noel's rant (which is well written and highly correct), it
is not well summarized that way. For example, quoting from the paper,
"Performing a function at a low level may be more efficient, if the
function can be performed with a minimum perturbation of the
machinery already included in the low-level subsystem". So, for
example, while we generally want retransmissions to run end to end,
in an 802.11 network there is a clear benefit that can be gained at
low cost in the RTS/CTS and retransmission behaviors of that system.
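The efficiency point can be made concrete with a toy calculation of my own (not from the paper, and deliberately simplified): over a path of several lossy links, recovering losses hop by hop costs far fewer transmissions than retransmitting end to end.

```python
# Toy model: n independent links, each dropping a frame with probability p.
# End-to-end recovery: retransmit across the whole path until one attempt
# survives every link (each attempt charged n link transmissions - a
# deliberate simplification that ignores where the drop occurred).
# Hop-by-hop recovery: each link retransmits locally until it succeeds.

def expected_tx_end_to_end(n, p):
    success = (1 - p) ** n        # one attempt survives all n links
    return n / success            # geometric: 1/success attempts, n tx each

def expected_tx_hop_by_hop(n, p):
    return n / (1 - p)            # each link needs 1/(1-p) tx on average

n, p = 5, 0.2
print(expected_tx_end_to_end(n, p))   # ~15.26
print(expected_tx_hop_by_hop(n, p))   # 6.25
```

The gap widens rapidly with path length and loss rate, which is exactly why link-layer retransmission on a lossy 802.11 hop is cheap insurance even when the transport also retransmits end to end.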
My precis would be: "when deciding where functionality should be
placed, put it wherever it is simplest, cheapest, and most reliable
when considered in the context of the entire network. That is usually
close to the edge."
Let's take a very specific algorithm. In the IP Internet, we do
routing - BGP, OSPF, etc., ad nauseam. Routing, as anyone who has spent
much time with it will confirm, can be complex and results in large
amounts of state maintained in the core. There are alternatives to
doing one's routing in the core; consider IEEE 802.5 Source Routing
for an example that occurred (occurs?) in thankfully-limited scopes.
We could broadcast DNS requests throughout the Internet with
trace-route-and-record options and have the target system reply using the
generated source route. Or not... Sometimes, there is a clear case
for complexity in the network, and state.
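The trade-off can be caricatured in a few lines (a toy sketch of mine, with an invented topology, not any real protocol): hop-by-hop forwarding keeps per-destination state at every core node, while source routing keeps the core stateless and pushes the burden into every packet header.

```python
# Toy contrast between state-in-the-core forwarding and source routing.
# Node names and the topology are invented for illustration.

# Hop-by-hop: every core node carries a routing table (state in the core).
tables = {
    "A": {"D": "B"},
    "B": {"D": "C"},
    "C": {"D": "D"},
}

def forward_hop_by_hop(src, dst):
    path, node = [src], src
    while node != dst:
        node = tables[node][dst]     # per-node lookup: state lives in the core
        path.append(node)
    return path

# Source routing: the packet carries the whole route; core nodes keep no
# per-destination state and simply pop the next hop off the header.
def forward_source_routed(route):
    packet = {"route": list(route), "payload": "hello"}
    path = []
    while packet["route"]:
        path.append(packet["route"].pop(0))   # stateless core: read the header
    return path

print(forward_hop_by_hop("A", "D"))            # ['A', 'B', 'C', 'D']
print(forward_source_routed(["A", "B", "C", "D"]))
```

Neither side is free: the first buys small packets at the price of distributed routing state and the protocols that maintain it, the second buys a dumb core at the price of discovering and carrying full routes.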
Let me mention also a different consideration, related to business
and operational impact. Various kinds of malware wander around the
network. One can often identify them by the way that they find new
targets to attack - they probe for them using ARP scans, address
scans, and port scans. We have some fairly simple approaches to using
this against them, such as configuring a tunnel to a honeypot on some
subset of the addresses on each LAN in our network (a so-called "grey
net"), or announcing the address or domain name of our honeypot in a
web page that we expect to be harvested. Honeypots, null routes
announced in BGP, remediation networks, and grey networks are all
examples of intelligence in the network that is *not* in the laptop
it is protecting.
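To see why a greynet works so well, here is a minimal sketch (addresses, range, and threshold all invented by me, not taken from any product): legitimate hosts almost never touch deliberately-unused addresses, so even a handful of probes into dark space is a strong scanner signal.

```python
# Minimal greynet-style scan detector. The address range and the
# threshold are hypothetical, chosen only for illustration.
from collections import Counter
from ipaddress import ip_address, ip_network

# Addresses deliberately left unused and tunneled to the honeypot/greynet.
GREYNET = ip_network("192.0.2.128/25")
THRESHOLD = 3   # probes into dark space before we flag a source

def find_scanners(flows):
    """flows: iterable of (src, dst) IP address strings."""
    dark_hits = Counter()
    for src, dst in flows:
        if ip_address(dst) in GREYNET:
            dark_hits[src] += 1
    return {src for src, n in dark_hits.items() if n >= THRESHOLD}

flows = [
    ("10.0.0.5", "192.0.2.1"),     # normal traffic to a live address
    ("10.0.0.9", "192.0.2.130"),   # probes into dark space
    ("10.0.0.9", "192.0.2.131"),
    ("10.0.0.9", "192.0.2.132"),
]
print(find_scanners(flows))   # {'10.0.0.9'}
```

The intelligence here sits in the network's address plan and monitoring point, not in the end hosts being protected, which is precisely the point of the paragraph above.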
The end-to-end arguments that I am familiar with argue not for
knee-jerk design-by-rote, but for the use of the mind. One wants to
design systems that are relatively simple to understand, to maintain,
and to isolate for diagnosis. The arguments do not say that
leaving all intelligence and functionality in the end system is the
one true religion; they observe, however, that the trade-offs in the
general case do lead one in that direction as a first intuition.