I was trying to be polite. Folk who have been peddling a technology which meets a need faced by essentially all Internet users, yet is being used by essentially none of them after 30+ years, should be willing to take the win and move on.
The people who are choosing not to use multicast are serious engineers who are fully aware that multicast exists. They are doing things differently for a reason, and dismissing their engineering choices as clearly unfounded and incorrect is not the way to persuade them.
On Thu, Jan 5, 2023 at 7:56 PM Dino Farinacci <farinacci@xxxxxxxxx> wrote:
>> Wall street is likely to be in a unique class here.
> Definitely a domain specific case.
>> I don't think we need to consider multicast a failure if it doesn't end up being used because the functionality is being provided at a higher level in the stack.
> Yes, but the worst place to do packet replication is at the application source. You simply kill the lower speed access links that leave the source. Which means you can only put content sources in centralized places with a lot of resources.
Seems like a rather narrow-minded view of the issues. Multicast always struck me as a layering violation with a rather indistinct use model and fuzzy set of semantics. I am not surprised to see folk choosing to do it differently.
Doing packet duplication at the transport layer means that reliability can be provided in the network rather than at the packet origin.
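To make that concrete, here is a toy sketch (not any existing protocol; the host names and ports are made up) of a transport-layer relay: the origin sends each chunk exactly once over one reliable connection, and the relay duplicates it onto per-receiver connections, so retransmission to a slow or lossy receiver is handled by the relay in the network rather than by the origin.

# Toy fan-out relay sketch: the origin sends each chunk once; the relay
# duplicates it onto per-receiver TCP connections, so per-receiver
# reliability (retransmission, flow control) lives in the network node,
# not at the packet origin. Hosts and ports are illustrative only.
import socket
import threading

RECEIVERS = []          # downstream sockets, one per subscriber
LOCK = threading.Lock()

def accept_receivers(listen_port=9001):
    srv = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    srv.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
    srv.bind(("", listen_port))
    srv.listen()
    while True:
        conn, _ = srv.accept()
        with LOCK:
            RECEIVERS.append(conn)

def relay_from_origin(origin_host="origin.example", origin_port=9000):
    up = socket.create_connection((origin_host, origin_port))
    while True:
        chunk = up.recv(4096)           # one copy arrives from the origin
        if not chunk:
            break
        with LOCK:
            for r in list(RECEIVERS):   # duplication happens here, mid-network
                try:
                    r.sendall(chunk)    # TCP handles loss per receiver
                except OSError:
                    RECEIVERS.remove(r)

threading.Thread(target=accept_receivers, daemon=True).start()
relay_from_origin()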
>> At some point we should do a QUIC like transport for Multicast/Packet Duplication.
> Ditto above comment.
Again, seems like you are better at dismissing different ideas than explaining your position.
From my point of view as an application protocol developer, multicast is essentially useless because I have to do my own work to provide a reliable transport, which in turn means I can't make use of WebRTC libraries and the like.
I was rather annoyed to be watching the Times Square New Year's Eve celebrations on YouTube TV with a 15 second delay. That could have been avoided if multicast had been part of the distribution technology. But I can understand why it wasn't.
The only way I can see multicast being useful in the context of Internet broadcast events is as one component in a hybrid scheme which looks like QUIC to the receiving endpoint application, with a multicast stream from the data source supplemented by a separate branching control channel providing authentication and reliability.
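As a rough illustration of what I have in mind (purely a sketch; the group address, ports, control server, and "REPAIR" message are all made up), the receiver side might join a multicast group for the bulk data while keeping a unicast control connection open for authentication material and repair requests:

# Sketch of a hybrid receiver: bulk data arrives on a multicast group,
# while a separate unicast control channel carries authentication data
# and serves repair/retransmission requests. All names/ports are made up.
import socket
import struct

GROUP, DATA_PORT = "239.1.2.3", 5000          # hypothetical multicast group
CONTROL = ("control.example", 6000)            # hypothetical control server

# Join the multicast group carrying the one-to-many data stream.
data_sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
data_sock.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
data_sock.bind(("", DATA_PORT))
mreq = struct.pack("4s4s", socket.inet_aton(GROUP), socket.inet_aton("0.0.0.0"))
data_sock.setsockopt(socket.IPPROTO_IP, socket.IP_ADD_MEMBERSHIP, mreq)

# Separate unicast channel: per-receiver authentication and repairs.
ctrl_sock = socket.create_connection(CONTROL)

received = {}
while True:
    packet, _ = data_sock.recvfrom(2048)
    seq = struct.unpack("!I", packet[:4])[0]   # assume a 4-byte sequence number
    received[seq] = packet[4:]
    # If a gap is detected, ask the control channel to repair it over unicast,
    # rather than expecting the multicast stream itself to be reliable.
    missing = set(range(min(received), max(received))) - received.keys()
    for m in sorted(missing):
        ctrl_sock.sendall(b"REPAIR %d\n" % m)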
Such a scheme would have to address security differently from the way QUIC handles it, because it is in the nature of AEAD/MAC schemes that anyone who can authenticate the data can also inject fake data. Thus it is necessary either to use public key signatures or to provide each recipient with their own authentication feed.
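To spell that out with a toy example (using Ed25519 from the Python "cryptography" package purely for illustration, not as a protocol proposal): with a group MAC key every receiver holds the key needed to verify and can therefore mint a valid tag for forged data, whereas with a public key signature the receivers hold only the verification key and cannot forge anything.

# Why a shared-key AEAD/MAC is not enough for multicast: every receiver
# holds the key needed to verify, and that same key can forge valid tags.
# A public key signature avoids this: receivers get only the verify key.
import hmac, hashlib
from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey
from cryptography.exceptions import InvalidSignature

payload = b"frame 42 of the broadcast"

# Shared symmetric key model: the source and every receiver know group_key.
group_key = b"shared-with-every-receiver"
tag = hmac.new(group_key, payload, hashlib.sha256).digest()
# A malicious *receiver* can compute an equally valid tag for forged data,
# and the other receivers will accept it:
forged = b"frame 42, but tampered with"
forged_tag = hmac.new(group_key, forged, hashlib.sha256).digest()
assert hmac.compare_digest(
    forged_tag, hmac.new(group_key, forged, hashlib.sha256).digest())

# Public key model: only the source holds the signing key.
source_key = Ed25519PrivateKey.generate()
verify_key = source_key.public_key()          # this is all a receiver gets
signature = source_key.sign(payload)
verify_key.verify(signature, payload)         # genuine frame accepted
try:
    verify_key.verify(signature, forged)      # receivers cannot forge this
except InvalidSignature:
    print("forged frame rejected")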
But no, go on. Perhaps we should give it another 30 years. Let's just keep doing what we are doing and maybe things will change.