Re: The TCP and UDP checksum algorithm may soon need updating

On 6/9/20 6:08 PM, John Levine wrote:
In article <3ac60a21-4aee-d742-bedc-5be3a4e65471@xxxxxxxx>,
Michael Thomas <mike@xxxxxxxx> wrote:
So the long and short of this entire issue seems to be: is the
uncaught error rate serious enough to warrant rethinking weak
transport (and, frankly, L2-layer) error detection? ...
Having read the papers that Craig referenced, that's my interpretation.

One of them is about a big physics application which sends multiple
terabytes of data over the net using what looks like a version of
FTP that transfers several files at once. They send the data as a lot
of 4 gig files. When they started verifying file checksums, they
found about 20% of the received files were corrupted in transit.

In that application they resend the corrupt files, so they obviously
need to make the files smaller. But retransmitting a whole file at a
time seems a lot less efficient than improving the checksums and using
the existing TCP packet-level retransmission: resending one ~1460-byte
segment instead of one 4 gig file is roughly a factor of three million
in retransmitted bytes.
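For anyone who hasn't stared at it lately, the check in question is the
16-bit ones' complement sum from RFC 1071. Here's a minimal Python
sketch of just the arithmetic (no packet framing or pseudo-header),
including one well-known blind spot: swapping two 16-bit words leaves
the sum unchanged, so reordering corruption is completely invisible to
it.

    import struct

    def internet_checksum(data: bytes) -> int:
        # 16-bit ones' complement sum per RFC 1071
        if len(data) % 2:
            data += b"\x00"          # pad odd-length input with a zero byte
        total = 0
        for (word,) in struct.iter_unpack("!H", data):
            total += word
            total = (total & 0xFFFF) + (total >> 16)   # end-around carry
        return ~total & 0xFFFF

    original = b"\xde\xad\xbe\xef\xca\xfe"
    swapped  = b"\xbe\xef\xde\xad\xca\xfe"   # first two 16-bit words swapped
    assert internet_checksum(original) == internet_checksum(swapped)

And even for error patterns it can see, a 16-bit check lets roughly one
in 65,536 random corruptions through undetected, which at multi-terabyte
scale stops being negligible.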

Which pretty much makes the case for something like transport-mode
IPsec, right? It's a hell of a lot cheaper to drop a packet than to
retransmit an entire file.
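To make the packet-granularity point concrete, here's a hypothetical
sketch of the per-segment integrity idea behind ESP (ESP commonly uses
HMAC-SHA-256 truncated to 128 bits, per RFC 4868). The key handling and
framing below are illustrative stand-ins, not the ESP wire format:

    import hashlib
    import hmac
    import os

    KEY = os.urandom(32)   # stand-in for a negotiated SA key

    def protect(payload: bytes) -> bytes:
        # append a 128-bit HMAC-SHA-256 tag over the payload
        tag = hmac.new(KEY, payload, hashlib.sha256).digest()[:16]
        return payload + tag

    def verify(segment: bytes):
        payload, tag = segment[:-16], segment[-16:]
        expected = hmac.new(KEY, payload, hashlib.sha256).digest()[:16]
        # on mismatch, drop the packet and let TCP retransmit it
        return payload if hmac.compare_digest(tag, expected) else None

    seg = protect(b"one segment's worth of bulk data")
    corrupted = bytes([seg[0] ^ 0x01]) + seg[1:]   # one bit flipped in transit
    assert verify(seg) is not None
    assert verify(corrupted) is None               # dropped, not delivered

The point being that detection and recovery both stay at segment
granularity: one flipped bit costs you one ~1500-byte retransmission,
not a 4 gig file.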

I guess this gets chalked up to false efficiencies. Thankfully we're at
a point where a lot of these tradeoffs don't require quite as much
hand-wringing.

Mike




