If multicast worked, it would be the ideal protocol for NNTP. As far as an NNTP client is concerned, the protocol could be functioning over multicast. In practice, of course, multicast would not be a useful protocol for NNTP, because you would at a minimum need a separate multicast channel per newsgroup, since not every news server is interested in every group. So under the covers NNTP does something more intelligent at the application layer to ensure that packets only travel over the wire to devices that are interested in them.

This proposal uses multicast, and thus every device will see a lot of traffic it has no interest in or use for. That is a bad application architecture in my view. It is only tolerable to the extent that on a LAN you can use Ethernet broadcast in place of multicast.

That is why I am somewhat skeptical of your claim that the protocol is widely deployed. There is no way that a multicast packet can route into or out of this house as far as I am aware, and I find it unlikely that will change. There is no way that multicast would traverse a properly configured firewall either.

I do not quite understand why there is this obsession with the idea that network configuration should be performed at the peer level. It has given us a series of protocols that network managers spend their time working to disable because they are unacceptably chatty and horrendously slow. I do not want my coffee pot participating in my network as a peer. I want it to have the minimum network access needed to support its function and absolutely nothing more.

Both my Mac and my Windows boxes take a noticeable length of time to work out what is on the local network here. Even the wired endpoints occasionally lose synchronization with each other.

DECnet solved this problem twenty years ago, so the patents must have expired by now. Instead of every machine in the network trying to manage it, you nominate a small number of machines as members of the cluster and give them a vote. Only the three to seven voting members of the cluster need to spend any time chattering and keeping their network databases in sync. If any voting member fails, its functions are taken over by the remaining ones. There is a network quorum: a voting member only updates network status information if it has a recent vote from a quorum of the voting members. This ensures that the network database stays consistent even if the network cable is cut. (Yes, DECnet did not degrade as gracefully as it should have, but that was an implementation mistake, not an architectural constraint.) A rough sketch of the quorum rule appears after the quoted message below.

It is worth pointing out some features that this architecture supported that are not available on the Internet in anything like the same form. You could write a server application in a couple of hours that was fully ACID and fault tolerant. The system would stay running even if a machine died, and when that machine came back it could resync transparently.

I do not think multicast is at all important to this proposal. It is a case of using a glass hammer to drive home a nail.

On Mon, Nov 30, 2009 at 7:56 PM, Stuart Cheshire <cheshire@xxxxxxxxx> wrote:
> On 30 Nov, 2009, at 15:23, Phillip Hallam-Baker wrote:
>
>> 90% of this proposal is equally relevant to the enterprise case.
>> But the multicast part is not.
>
> The document is called "Multicast DNS". Which parts of the document do you
> think do *not* relate to multicast?
>
> Stuart Cheshire <cheshire@xxxxxxxxx>
> * Wizard Without Portfolio, Apple Inc.
> * Internet Architecture Board
> * www.stuartcheshire.org

--
New Website: http://hallambaker.com/
View Quantum of Stupid podcasts, Tuesday and Thursday each week,
http://quantumofstupid.com/
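
P.S. To make the quorum rule above concrete, here is a minimal sketch in Python of how a voting member might decide whether to accept an update. It is purely illustrative: the names (Voter, VOTERS, VOTE_TIMEOUT), the majority threshold, and the timeout value are my own assumptions, not anything taken from DECnet or from the mDNS draft.

# Illustrative sketch of a quorum rule: a voting member only applies an
# update to its copy of the network database if it has heard recently
# from a majority of the configured voters. All names and values here
# are assumptions for the example, not from DECnet or any real product.

import time

VOTERS = ["node-a", "node-b", "node-c", "node-d", "node-e"]  # three to seven members
VOTE_TIMEOUT = 30.0  # seconds for which a vote counts as "recent"


class Voter:
    def __init__(self, name):
        self.name = name
        self.database = {}   # the replicated network status database
        self.last_vote = {}  # voter name -> timestamp of the last vote seen from it

    def record_vote(self, voter, timestamp=None):
        """Remember that we heard from another voting member."""
        self.last_vote[voter] = timestamp if timestamp is not None else time.time()

    def has_quorum(self, now=None):
        """True if a majority of the voters (counting ourselves) voted recently."""
        now = now if now is not None else time.time()
        recent = sum(1 for v in VOTERS
                     if v == self.name or now - self.last_vote.get(v, 0) < VOTE_TIMEOUT)
        return recent > len(VOTERS) // 2

    def apply_update(self, key, value):
        """Update the shared database only while we can see a quorum.

        A member cut off from the majority (say, by a severed cable)
        refuses to write, so partitioned copies cannot diverge.
        """
        if not self.has_quorum():
            raise RuntimeError("no quorum: refusing to update network database")
        self.database[key] = value


# Example: node-a has recent votes from node-b and node-c, so three of the
# five voters are current and the update is accepted.
a = Voter("node-a")
a.record_vote("node-b")
a.record_vote("node-c")
a.apply_update("router-1", "up")

The point is simply that a member cut off from the majority stops writing, so the two sides of a partition can never both accept conflicting updates; that is what keeps the database consistent when the cable is cut.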