Re: e2e


 



I guess I'll jump in as well. I was reading some of the related papers recently for a different reason, including the ones on active networks (thank gods they are history), and considering whether that concept is in line with the e2e philosophy.

In any event, exploring one of your examples with the concepts in the paper in mind (perhaps I am applying the concepts too literally): that the network may filter some (and that being the keyword) malware or suspicious traffic based on certain parameters is fine; but the point is that in the end, an application may have to determine what it accepts as legitimate traffic based on its own criteria. Email junk filtering comes to mind as an example.

Trying to map that to one of the statements from the paper: "For the data communication system to go out of its way to be an extraordinary filter does not reduce the burden on the application program to filter as well." In some sense it does reduce it (for most apps or users, the functionality provided by the network may be sufficient), but we get the idea. Entities in the data communication system :), say the mail servers, do some filtering, but different email applications use different techniques to get the job done, and some adapt based on user input, etc. I know there are efforts to do more and more in the mail servers, but the email applications are also getting more sophisticated over time.
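To make the layered-filtering point concrete, a toy sketch (all names, rules, and addresses here are invented): the network-side filter catches some junk, and the application still applies its own criteria on top.

```python
# Sketch: coarse server-side filtering plus the application's own filtering.
# All senders, bodies, and rules are hypothetical illustrations.

def server_filter(messages):
    """Network-side filter: drop obvious junk by keyword."""
    blocked = {"lottery", "winner!!!"}
    return [m for m in messages
            if not any(w in m["body"].lower() for w in blocked)]

def client_filter(messages, user_blocklist):
    """End-application filter: the user's own, adaptive criteria."""
    return [m for m in messages if m["sender"] not in user_blocklist]

inbox = [
    {"sender": "friend@example.org",   "body": "Lunch tomorrow?"},
    {"sender": "spam@example.net",     "body": "You are a WINNER!!!"},
    {"sender": "annoying@example.com", "body": "Newsletter #42"},
]

passed_server = server_filter(inbox)  # the network does *some* filtering...
final = client_filter(passed_server, {"annoying@example.com"})  # ...the app still filters
print([m["sender"] for m in final])   # ['friend@example.org']
```

The server's filter reduces the burden but does not eliminate it: the application is still the place where "legitimate" is finally decided.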

Your point is well taken of course that there are no cut and dried rules.

For instance, I am not fully in tune with the arguments on security (secure data encapsulation, to be more precise) in the paper. The paper says that "data will be in the clear and thus vulnerable as it passes into the target node and is fanned out to the target application. Third, the authenticity of the message must still be checked by the application."

That goes to the extent of saying that end-node to end-node protection is not sufficient and that the data must really be protected all the way up at the application layer. I might, in other contexts, make the argument for security properties that do belong in the application layer (non-repudiation comes to mind, for instance), but there are security properties that we'd get through network-layer security that we might not really get through application-layer security. I am also not sure I understand the point about the authenticity of the message having to be checked by the application (do they mean that the data is vulnerable and that's why?). I am also curious whether some of this has to do with multi-user systems being popular back then.
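One way to read that authenticity point is that the application verifies the payload itself, whatever protection the network layers provide underneath. A minimal sketch using Python's stdlib hmac module; the key and messages are invented:

```python
import hashlib
import hmac

# Sketch: application-layer authenticity check, independent of any
# network-layer protection. Key and messages are hypothetical.
key = b"shared-application-secret"

def sign(payload: bytes) -> bytes:
    """Compute a MAC over the payload with the application's key."""
    return hmac.new(key, payload, hashlib.sha256).digest()

def verify(payload: bytes, tag: bytes) -> bool:
    """Constant-time check: the application decides authenticity itself."""
    return hmac.compare_digest(sign(payload), tag)

msg = b"transfer 100 units"
tag = sign(msg)
print(verify(msg, tag))                    # True
print(verify(b"transfer 900 units", tag))  # False: tampering detected
```

Even if every hop were encrypted, a check like this is what lets the receiving application, rather than the network, vouch for the message.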

Now that sounds like a rant ;).

regards,
Lakshminath

On 8/14/2007 2:21 PM, Fred Baker wrote:
On Jul 26, 2007, at 8:47 PM, Hallam-Baker, Phillip wrote:
I don't think that I am misrepresenting the paper when I summarize it as saying 'keep the complexity out of the network core'

I'm slogging through some old email, and chose to pick up on this.

Following Noel's rant (which is well written and highly correct), it is not well summarized that way. For example, quoting from the paper, "Performing a function at a low level may be more efficient, if the function can be performed with a minimum perturbation of the machinery already included in the low-level subsystem". So, for example, while we generally want retransmissions to run end to end, in an 802.11 network there is a clear benefit that can be gained at low cost in the RTS/CTS and retransmission behaviors of that system.
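To caricature the 802.11 point in code: a local retry loop on a lossy hop masks most losses cheaply, so the end-to-end layer rarely has to retransmit. The loss probability and retry count below are invented:

```python
import random

random.seed(1)  # deterministic for illustration

def lossy_hop(p_loss=0.3):
    """One traversal of an unreliable link."""
    return random.random() >= p_loss

def link_layer_send(retries=3):
    """Local (802.11-style) retransmission: cheap, low-latency recovery."""
    return any(lossy_hop() for _ in range(retries))

def end_to_end_send(use_link_retries):
    """End-to-end backstop: retry until delivered; count attempts."""
    attempts = 0
    while True:
        attempts += 1
        ok = link_layer_send() if use_link_retries else lossy_hop()
        if ok:
            return attempts

# Average number of costly end-to-end attempts per message, over 1000 messages.
with_link = sum(end_to_end_send(True) for _ in range(1000)) / 1000
without = sum(end_to_end_send(False) for _ in range(1000)) / 1000
print(with_link, without)
```

The end-to-end retransmission is still there as the correctness backstop; the link-layer retries just make it fire far less often, which is the low-cost "minimum perturbation" optimization the paper allows for.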

My precis would be: "in deciding where functionality should be placed, do so in the simplest, cheapest, and most reliable manner when considered in the context of the entire network. That is usually close to the edge."

Let's take a very specific algorithm. In the IP Internet, we do routing - BGP, OSPF, etc., ad nauseam. Routing, as anyone who has spent much time with it will confirm, can be complex and can result in large amounts of state maintained in the core. There are alternatives to doing one's routing in the core; consider IEEE 802.5 Source Routing for an example that occurred (occurs?) in thankfully-limited scopes. We could broadcast DNS requests throughout the Internet with trace-route-and-record options and have the target system reply using the generated source route. Or not... Sometimes there is a clear case for complexity, and state, in the network.

Let me mention also a different consideration, related to business and operational impact. Various kinds of malware wander around the network. One can often identify them by the way that they find new targets to attack - they probe for them using ARP scans, address scans, and port scans. We have some fairly simple approaches to using this against them, such as configuring a tunnel to a honeypot on some subset of the addresses on each LAN in our network (a so-called "grey net"), or announcing the address or domain name of our honeypot in a web page that we expect to be harvested. Honeypots, null routes announced in BGP, remediation networks, and grey networks are all examples of intelligence in the network that is *not* in the laptop it is protecting.
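The grey-net idea can be sketched very simply: a monitored slice of otherwise-unused address space, where any probe is presumed to be scanning. The addresses below are illustrative (RFC 5737 documentation space):

```python
import ipaddress

# Sketch of a "grey net": part of a LAN's address range is unused but
# monitored; anything probing it is flagged. Addresses are hypothetical.
grey_net = ipaddress.ip_network("192.0.2.64/26")

observed_probes = [
    ("10.1.1.23", "192.0.2.70"),  # hits the grey net -> suspicious
    ("10.1.1.9",  "192.0.2.10"),  # ordinary destination, ignored
    ("10.1.1.23", "192.0.2.99"),  # second grey-net hit from the same host
]

suspects = {src for src, dst in observed_probes
            if ipaddress.ip_address(dst) in grey_net}
print(sorted(suspects))  # ['10.1.1.23']
```

This is intelligence in the network rather than in the end system: the laptop being protected never sees any of it.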

The end to end arguments that I am familiar with argue not for knee-jerk design-by-rote, but for the use of the mind. One wants to design systems that are relatively simple both to understand and to maintain, and to isolate for diagnosis. The arguments do not say that leaving all intelligence and functionality in the end system is the one true religion; they observe, however, that the trade-offs in the general case do lead one in that direction as a first intuition.

_______________________________________________

Ietf@xxxxxxxx
https://www1.ietf.org/mailman/listinfo/ietf


