Re: The TCP and UDP checksum algorithm may soon need updating

On Fri, Jun 5, 2020 at 12:39 PM Nico Williams <nico@xxxxxxxxxxxxxxxxx> wrote:
On Fri, Jun 05, 2020 at 12:10:25PM -0400, Phillip Hallam-Baker wrote:
> On Fri, Jun 5, 2020 at 12:01 AM Joseph Touch <touch@xxxxxxxxxxxxxx> wrote:

> > Before we solve a problem in theory rather than in practice.
>
> Has anyone been looking? The security area has always been interested in

No one looks for this.

Does anyone have infrastructure that works so well and so dependably that they can afford the time to perform a full post mortem on every error?

How many people do post mortems versus turning it off and on again? There are so many bugs in the application layer that hardware errors are rarely considered. 

Case in point: I have close to $10K worth of consumer-level Internet gear in my house. Every so often the network gets into some weird race condition that is clearly a consequence of buffer bloat or some piece of connected hardware spamming the network with garbage. The manufacturers provide precisely zero support for debugging. None, nil, nada.

The reason I try to use consumer-grade equipment is that I want to understand the system from the consumer's perspective, and right now I don't have any respect for any of the hardware suppliers involved. That said, 24-port PoE Ethernet switches are probably not consumer grade.

 
> theoretical attacks. They are by far the best kind.

This is a real problem, not theoretical.

Of course it is real. The Internet is a sufficiently large network that everything that can happen will happen.
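
To make that concrete, here is a minimal sketch (my own illustration, nothing from this thread) of why the 16-bit Internet checksum lets certain corruptions through: it is a ones'-complement sum of 16-bit words, so reordering words in flight, or any two flips that cancel, leaves the checksum unchanged.

    # Ones'-complement sum of 16-bit words, per RFC 1071.
    def internet_checksum(data: bytes) -> int:
        if len(data) % 2:
            data += b"\x00"                            # pad odd-length input
        total = 0
        for i in range(0, len(data), 2):
            total += (data[i] << 8) | data[i + 1]
            total = (total & 0xFFFF) + (total >> 16)   # fold the carry back in
        return ~total & 0xFFFF

    original  = b"\x12\x34\x56\x78"                    # words 0x1234, 0x5678
    reordered = b"\x56\x78\x12\x34"                    # same words, swapped in transit
    assert internet_checksum(original) == internet_checksum(reordered)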

 
Now, we've talked about how some applications are or can easily be
impervious to this.  If you're transferring static data, this is not a
problem because you just use crypto that detects TCP checksum failures
and then make the application protocol recover.  But some applications
are more difficult to address than others.

I wonder how much TCP offload HW will complicate the upgrade path here.
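
For concreteness, here is a minimal sketch of the detection half of that approach. It is my own illustration, not anything Nico specified: an HMAC over each record, with a failed check treated like a lost message that the application asks for again (key management is out of scope for the sketch).

    import hashlib
    import hmac
    import secrets

    key = secrets.token_bytes(32)          # shared key, established out of band

    def seal(payload: bytes) -> bytes:
        # Sender: append an HMAC-SHA-256 tag so the receiver can check integrity.
        return payload + hmac.new(key, payload, hashlib.sha256).digest()

    def unseal(message: bytes) -> bytes | None:
        # Receiver: return the payload if the tag verifies, else None (re-request it).
        payload, tag = message[:-32], message[-32:]
        expected = hmac.new(key, payload, hashlib.sha256).digest()
        return payload if hmac.compare_digest(tag, expected) else None

    sealed = seal(b"record 42")
    damaged = bytearray(sealed)
    damaged[3] ^= 0x01                     # simulate corruption arriving at the application
    assert unseal(bytes(damaged)) is None  # detected and recoverable above TCP
    assert unseal(sealed) == b"record 42"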

The argument I was making was somewhat different from the one Joe responded to.

My point is that data transfers are becoming large enough that it is no longer sensible or useful for the application layer to assume that TCP is a perfect, error-free transport.

But the memory issue Michael raised is also rather important. The UNIX assumption that 'everything is a stream of bytes' was viable at one point in time, but as we build larger and larger systems that assumption also becomes weak. We need to think about how we store large quantities of data on SSDs and the like as well. I already have RAID arrays approaching 100 TB, and soon petabyte SSD stores will be common.


If we want systems to work well, we cannot build systems that hurl terabytes of data about in exactly the same way we built them when a 20 MB drive was the acme of luxury.

Rather than trying to make TCP/IP a flawless transport, we have to apply the same principle at a higher layer: treat the channel as lossy and make it robust from above.
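
A minimal sketch of what that can look like (my own illustration, not a specific protocol): the sender ships a strong digest per chunk, and the receiver re-requests any chunk that fails to verify instead of trusting the transport checksum end to end.

    import hashlib

    CHUNK = 1 << 20                                     # 1 MiB; an arbitrary choice here

    def make_manifest(data: bytes) -> list[str]:
        # Sender: a SHA-256 digest per chunk, shipped alongside the data.
        return [hashlib.sha256(data[i:i + CHUNK]).hexdigest()
                for i in range(0, len(data), CHUNK)]

    def chunk_ok(index: int, chunk: bytes, manifest: list[str]) -> bool:
        # Receiver: a False result means "fetch this chunk again", not "abort".
        return hashlib.sha256(chunk).hexdigest() == manifest[index]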

