Re: Domain Centric Administration, RE: draft-ietf-v6ops-natpt-to-historic-00.txt

>> dream on.  in every case where I have worked with an application that
>> tried to be independent of lower layers, that has failed.  there's
>> always been some need for the application to be aware of the
>> characteristics of an underlying layer.  TCP and SCTP aren't
>> semantically equivalent because of the lack of urgent data and clean
>> close, SMTP and lpr over DECnet had to be aware of record-size
>> limitations that don't exist in TCP, etc.
>>     
>
> Keith, while I agree with your general point that applications have no choice
> but to be aware of lower layer semantics in many if not most cases, this last
> is not a good example of that. There is really no difficulty running SMTP or
> any other stream-oriented protocol on top of a record-based protocol - all you
> have to do is ignore the record boundaries and make sure your buffer isn't
> larger than the maximum record size.
well, mumble.   you still get into interesting boundary cases when, for
example, an application reads a zero-length record from a record-based
source that it wants to treat as a stream, and interprets the
zero-length read as end of file.  I suppose that's an API issue rather
than a protocol issue, but the API differences are consequences of the
underlying protocol differences.  practically speaking, the code nearly
always has to have been written with awareness of the lower layer(s) it's
going to be used with, before it can be expected to work reliably. 
sometimes you can isolate that awareness to a protocol-specific
adaptation layer, sometimes not.

(in running lpr over DECnet, I seem to recall that one of the problems
was that DECnet has no way to shut down one end of a connection while
keeping the other end open, so some sort of protocol change was
necessary to make this work.)
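
(again a sketch, not from the original thread: this is the TCP half-close
idiom, using plain BSD sockets, that lpr-style protocols lean on.
send_job() is a made-up helper; the point is just the shutdown(SHUT_WR)
call, the step that reportedly had no DECnet equivalent:)

#include <stdio.h>
#include <sys/types.h>
#include <sys/socket.h>

/* hypothetical helper: send one job, then read the peer's reply */
int send_job(int sock, const char *data, size_t len)
{
    char reply[512];
    ssize_t n;

    if (send(sock, data, len, 0) < 0)
        return -1;

    /* half-close: FIN our sending direction, keep the receiving
     * direction open; this is the step DECnet couldn't express */
    if (shutdown(sock, SHUT_WR) < 0)
        return -1;

    /* the connection is still open in the other direction, so we can
     * still read whatever the peer sends back after seeing our FIN */
    while ((n = recv(sock, reply, sizeof(reply), 0)) > 0)
        fwrite(reply, 1, (size_t)n, stdout);

    return (n < 0) ? -1 : 0;
}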
>> a TCP peer address happens to be an IP address and port
>> number.  adding an extra naming layer and indirection for the sake of
>> layering purity simply isn't worth it, and the people who tout the
>> benefits of the extra naming layer (and there are some) tend to
>> disregard the cost and complexity of providing and maintaining that
>> layer and dealing with the cases where it fails.
>>     
>
> I don't discount the costs; however, I think that sooner or later we're going
> to be forced to implement some additional naming services. Whether or not these
> need to reuse existing DNS services, use DNS protocols as part of a new
> service, or use something other than DNS to build the service is not a question
> I'm prepared to answer right now. I know you're strongly against basing such
> things on existing DNS services but I'm not convinced this is a bad idea - the
> vast existing infrastructure is a fairly compelling point in its favor.
>   
if the only tool you have is a hammer...

basically I think if we're going to bother adding an extra layer of
naming, it had better work a hell of a lot better than DNS.   between
widespread misconfiguration and lots of lame hacks like DNS-ALG and
multi-faced DNS, not to mention ambiguities in how DNS APIs interface
with things like LLMNR and proprietary name lookup services, and fairly
poor reliability, we need to be thinking about how to replace DNS rather
than how to add yet another layer that depends on it.  IMHO.

Keith


