Re: The TCP and UDP checksum algorithm may soon need updating

In article <3ac60a21-4aee-d742-bedc-5be3a4e65471@xxxxxxxx>,
Michael Thomas  <mike@xxxxxxxx> wrote:
>So the long and short of this entire issue seems to be: is the
>uncaught error rate serious enough to warrant rethinking weak
>transport and, frankly, L2 layer error detection? ...

Having read the papers that Craig referenced, that's my interpretation.
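
For anyone who hasn't looked at it in a while, the checksum TCP and
UDP share is just the 16-bit ones' complement sum of the segment
(RFC 1071).  A rough sketch in C, leaving out the pseudo-header that
real stacks also fold in:

    #include <stddef.h>
    #include <stdint.h>

    /* RFC 1071-style Internet checksum: ones' complement sum of
       16-bit words, carries folded back in, result complemented. */
    uint16_t inet_checksum(const uint8_t *data, size_t len)
    {
        uint32_t sum = 0;

        while (len > 1) {                  /* add up 16-bit words */
            sum += ((uint32_t)data[0] << 8) | data[1];
            data += 2;
            len  -= 2;
        }
        if (len == 1)                      /* pad a trailing odd byte */
            sum += (uint32_t)data[0] << 8;

        while (sum >> 16)                  /* fold carries into low 16 bits */
            sum = (sum & 0xFFFF) + (sum >> 16);

        return (uint16_t)~sum;
    }

Since that sum is commutative and only 16 bits wide, reordering
aligned 16-bit words or a pair of errors that cancel in the sum is
invisible to it, and even a random corruption gets past it about one
time in 65536.  That's the "weak" part of the question above.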

One of them is about a big physics application which sends multiple
terabytes of data over the net using what looks like a version of
FTP that transfers several files at once.  They send the data as a
lot of 4 gig files.  When they started verifying file checksums, they
found about 20% of the received files were corrupted in transit.

In that application they resend the corrupt files, and they obviously
need to make the files smaller. But retransmitting a whole file at a
time seems a lot less efficient than improving the checksums and using
the existing TCP packet-level retransmission.
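
Rough numbers, assuming a typical ~1460 byte TCP payload: a 4 gig
file is about three million segments, so throwing away and resending
the whole file to repair what may be a single bad segment costs
roughly three million times more retransmitted data than resending
that one segment would.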

-- 
Regards,
John Levine, johnl@xxxxxxxxx, Primary Perpetrator of "The Internet for Dummies",
Please consider the environment before reading this e-mail. https://jl.ly



