> From: Melinda Shore <mshore@xxxxxxxxx>
>
> "Unix" ... was designed around a few abstractions, like pipes,
> filedescriptors, and processes, and by the time IP was implemented we'd
> pretty much settled on filedescriptors as endpoints for communications.
> We could do things with them like i/o redirection, etc. ... in Unix we
> shouldn't care whether an input or output stream is a terminal, a file,
> or a network data stream

Wow, that tickled some brain cells that have been dormant for a very,
very long time!

My memory of this goes all the way back to what I believe was the very
first mechanism added to V6 Unix to allow arbitrary IPC (i.e. between
unrelated processes): a pipe-like mechanism produced, if vague memories
serve, by Rand. This is all probably irrelevant now, but here are a few
memories...

One of the problems I recall we had with the Unix stream paradigm is
that it was not a very good semantic match for unreliable, asynchronous
communication, where you had no guarantee that the data you were trying
to 'read' would ever arrive (e.g. things like UDP), nor for
communication that was effectively record-based (again, UDP).

Sure, TCP could be coerced reasonably well into a stream paradigm. But
just as RPC has failure modes that a local call doesn't, and therefore
needs semantics extended beyond those of a vanilla local call, so too it
is with a Unix stream (which, in V6, was the only I/O mechanism).

Noel

_______________________________________________
Ietf@xxxxxxxx
https://www.ietf.org/mailman/listinfo/ietf
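The record-boundary mismatch Noel describes can be sketched with modern POSIX-style sockets (which of course postdate V6; `socketpair` and `SOCK_DGRAM` here are illustrative stand-ins, not the Rand mechanism):

```python
# A minimal sketch of the stream-vs-record mismatch, using modern
# socket APIs in Python purely for illustration.
import socket

# Stream semantics: the kernel is free to coalesce separate writes,
# so record boundaries are lost on the receiving side.
a, b = socket.socketpair(socket.AF_UNIX, socket.SOCK_STREAM)
a.sendall(b"hello")
a.sendall(b"world")
stream_chunk = b.recv(64)      # typically one coalesced blob
a.close(); b.close()

# Datagram (UDP-like) semantics: each send is one record, and a
# single recv never crosses a record boundary.
c, d = socket.socketpair(socket.AF_UNIX, socket.SOCK_DGRAM)
c.send(b"hello")
c.send(b"world")
first = d.recv(64)             # exactly the first record
second = d.recv(64)            # exactly the second record
c.close(); d.close()
```

A stream reader that needs records has to invent its own framing on top; a datagram reader gets it for free, which is why forcing both through the same `read`-on-a-file-descriptor model was an awkward fit.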