Re: The TCP and UDP checksum algorithm may soon need updating

My first gut reaction to this was 'do the check at the application layer'.

And then I realized that was completely wrong. We have to know what error rate we should expect at the TCP level so that we can write the application in a sensible fashion.

Consider the case in which I am transferring a 60GB 4K movie over the net. Say for the sake of argument there is a 1% chance of a one-bit failure. If we only have one checksum on that lot, we have a significant probability of the whole effort being wasted.
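
To put rough numbers on the 'wasted effort' point, here is a back-of-envelope sketch in Python (purely illustrative; the 1% figure is the one assumed above, and the 10MB chunk size anticipates the scheme further down):

    # Illustration only: compare expected retransmitted data with one
    # checksum over the whole transfer versus a checksum every 10MB.
    FILE_BYTES  = 60 * 10**9     # the 60GB movie
    CHUNK_BYTES = 10 * 10**6     # 10MB verification interval
    P_FAIL      = 0.01           # assumed chance of a one-bit failure

    # One checksum over everything: a single bad bit forces a full resend.
    expected_resend_single = P_FAIL * FILE_BYTES     # ~600MB on average

    # A checksum every 10MB: the same bad bit costs at most one chunk.
    expected_resend_chunked = P_FAIL * CHUNK_BYTES   # ~100KB on average

    print(expected_resend_single, expected_resend_chunked)
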
There is another problem with the use of MACs: a MAC verification failure is going to be reported (correctly) as an authentication error, not as a communication error.


I don't think we can expect the transport layer to be 100% reliable. Using MD5 makes no sense; SHA-2 isn't much slower, if at all.

If we are using a Merkle-Damgård construction, it is pretty easy to calculate both a final value and interim values at (almost) no extra cost.
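
As a rough sketch of the 'interim values for free' idea, assuming SHA-256 and Python's hashlib purely for illustration (a real transport would do this in the stack, not in application code): copying the running hash state and finalizing the copy gives an interim output without disturbing the main computation.

    import hashlib

    CHUNK = 10 * 1024 * 1024        # emit an interim value every ~10MB

    def digests_over(stream):
        """Yield (index, interim digest) per chunk, then the final digest.

        The running SHA-256 object holds the Merkle-Damgard chaining state;
        h.copy() lets us finalize an interim output at almost no extra cost
        while the main computation carries on to the end of the stream."""
        h = hashlib.sha256()
        index = 0
        while True:
            block = stream.read(CHUNK)
            if not block:
                break
            h.update(block)
            yield index, h.copy().hexdigest()   # interim output value
            index += 1
        yield 'final', h.hexdigest()            # digest of the whole transfer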

So let's imagine I am transferring that 60GB. I begin calculating the SHA-2 value at the start. Every 10MB or so, I send out the current output value. If the receiver detects an error in an output value, it calls back and says, 'hey, that output value was wrong, resend that chunk'.
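
On the receiving side the same loop runs in step, and a mismatch pins the error down to a single 10MB chunk. A minimal sketch, again purely illustrative, with the 'call back and resend' part left as a callback:

    import hashlib

    CHUNK = 10 * 1024 * 1024

    def verify(stream, expected_interims, request_resend):
        """Recompute interim digests; flag the first chunk that disagrees.

        expected_interims: interim hex digests received from the sender.
        request_resend:    callback taking the index of the bad chunk."""
        h = hashlib.sha256()
        for index, expected in enumerate(expected_interims):
            h.update(stream.read(CHUNK))
            if h.copy().hexdigest() != expected:
                request_resend(index)   # 'that output value was wrong'
                return False
        return True

One wrinkle with a chained construction: every later interim value depends on the bad chunk, so once it is resent the receiver has to resume hashing from the corrected data onward. That is part of what makes the tree variant below attractive.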

We can get fancier if people like and minimize the amount of data that needs to be resent.

Alternatively, we could get creative and define a Merkle tree construction to replace Merkle-Damgård, making it possible to parallelize digest computation.
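
A rough sketch of what that tree could look like (plain Python again, 10MB leaves, a binary tree with the last node duplicated on odd levels; all of this is illustrative, not a proposal for the actual construction). The leaf hashes are independent of one another, so they parallelize cleanly and a single bad chunk can be re-checked without rehashing anything else:

    import hashlib
    from concurrent.futures import ThreadPoolExecutor

    def leaf_hash(chunk):
        return hashlib.sha256(b'\x00' + chunk).digest()   # domain-separated leaf

    def node_hash(left, right):
        return hashlib.sha256(b'\x01' + left + right).digest()

    def merkle_root(chunks):
        """Binary Merkle tree over a list of 10MB chunks."""
        # hashlib releases the GIL for large buffers, so threads really do
        # hash the leaves in parallel.
        with ThreadPoolExecutor() as pool:
            level = list(pool.map(leaf_hash, chunks))
        while len(level) > 1:
            if len(level) % 2:              # odd level: duplicate the last node
                level.append(level[-1])
            level = [node_hash(level[i], level[i + 1])
                     for i in range(0, len(level), 2)]
        return level[0]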
