Re: [saag] Ten years after Snowden (2013 - 2023), is IETF keeping its promises?

> I was trying to be polite. Folk who have been peddling a technology which meets a need faced by essentially all Internet users that is being utilized by essentially none of them after 30+ years should be willing to take the win and move on.

My intent wasn't to be dismissive. I am not sure why you thought that.

> The people who are choosing not to use multicast are serious engineers who are fully aware of the existence of multicast. They are doing things differently for a reason and being dismissive about their engineering choices as clearly unfounded and incorrect is not the way to persuade them.

Yes, and I was simply pointing out a conceptual optimization. It has been clear that people are *willing* to pay the cost of head-end replication at the application source. They do that because it gives them more control. It's understandable.

> Ditto above comment.
> 
> Again, seems like you are better at dismissing different ideas than explaining your position.

I did explain my position. It has been proven in deployed work that distributed replication (in routers) scales better, but it comes at a different cost than head-end replication. Engineers decide on that tradeoff.
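To make the tradeoff concrete, here is a back-of-the-envelope sketch in Python (the audience size and bitrate are made-up numbers) showing where the replication cost lands in each model:

receivers = 10_000        # hypothetical audience size
stream_mbps = 5           # hypothetical per-receiver stream bitrate

# Head-end replication: the source (or its CDN edge) sends one unicast
# copy per receiver, so the fan-out cost is paid at the application.
head_end_uplink_mbps = receivers * stream_mbps

# Distributed (in-network) replication: the source emits one copy and
# routers duplicate packets only where the delivery tree branches, so
# the source uplink stays flat and the cost moves into per-group router state.
multicast_uplink_mbps = stream_mbps

print(f"head-end uplink:  {head_end_uplink_mbps} Mbps")
print(f"multicast uplink: {multicast_uplink_mbps} Mbps (plus per-group state in routers)")

The cost doesn't disappear with multicast; it moves from the source's uplink into state held by the routers, which is the tradeoff being weighed.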

> From my point of view as an application protocol developer, multicast is essentially useless because I have to do my own work to provide a reliable transport and that in turn means I can't make use of WebRTC libraries and the like.

For UDP applications, it's simpler for the app to send one packet on a UDP socket when it wants that packet to reach multiple recipients.
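As a minimal sketch of that difference at the socket API (the addresses, port, and group below are made up for illustration):

import socket

DATA = b"one application payload"

# Head-end replication: the application tracks every receiver and
# transmits one unicast copy per receiver.
unicast_receivers = [("192.0.2.1", 5000), ("192.0.2.2", 5000)]
with socket.socket(socket.AF_INET, socket.SOCK_DGRAM) as s:
    for addr in unicast_receivers:
        s.sendto(DATA, addr)

# Multicast: the application sends the packet once to a group address
# and the network handles the fan-out to whoever has joined the group.
GROUP = ("239.1.2.3", 5000)   # administratively scoped example group
with socket.socket(socket.AF_INET, socket.SOCK_DGRAM) as s:
    s.setsockopt(socket.IPPROTO_IP, socket.IP_MULTICAST_TTL, 16)
    s.sendto(DATA, GROUP)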

> I was rather annoyed to be watching the Times Square new years eve celebrations on You Tube TV with a 15 second delay. That could have been avoided if multicast had been a part of the distribution technology. But I can understand why it wasn't.

Yes, right.

> The only way I can see multicast being useful in the context of Internet broadcast events is as one component in a hybrid scheme which looks like QUIC to the receiving endpoint application with a multicast stream from the data source being supplemented by a separate branching control channel providing authentication and reliability.

Whether multicast is useful is getting off topic for this thread. Unless you believe that removing source addresses from a packet cannot satisfy the multicast requirement, that is a separate point to argue.

> Such a scheme would have to address security differently from the handling in QUIC because it is the nature of AEAD/MAC schemes that anyone who can authenticate the data can inject fake data. Thus it is necessary to either use public key or to provide each recipient with their own authentication feed.

It is well known how complex multicast key management is, and a KEK (key-encryption-key) approach doesn't scale with the number of receivers. But that state (for multiple recipients) has to be stored somewhere, at one layer or another, so an engineer needs to make their own tradeoffs.
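As a rough sketch of where that per-recipient state shows up, assuming the source keeps one symmetric MAC key per receiver (the receiver names and key sizes are invented for illustration):

import hmac, hashlib, os

receivers = ["alice", "bob", "carol"]           # hypothetical receivers
keys = {r: os.urandom(32) for r in receivers}   # one MAC key per receiver, held at the source

packet = b"multicast payload"

# One authentication tag per receiver: per-recipient key state and
# per-packet overhead grow linearly with the audience.
tags = {r: hmac.new(k, packet, hashlib.sha256).digest() for r, k in keys.items()}

# A single shared group key would need only one tag per packet, but any
# member who can verify it could also forge it; asymmetric signatures
# avoid that at the cost of CPU instead of key state.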

> But no, go on. Perhaps we should give it another 30 years. Lets just keep doing what we are doing and maybe things will change.

This is not a binary success-or-failure argument, and it should be taken offline since it's off topic. If you want to discuss privately, I would love to. Thanks.

Dino




