Re: why IPv6 is bad, No, SMTP is IPv4, Was: SMTP and IPv6

On 7/3/24 08:05, Phillip Hallam-Baker wrote:

> So, about defeating traffic analysis...
>
> I really don't understand why there is this fetish for keeping the IP address the same from endpoint to endpoint. In 1985, the endpoints typically weighed 800 lb and were not likely to move. A user was not going to switch networks during a call.
>
> Today, being able to keep the transport connection going when the network connection changes is table stakes for new proposals. And that means the IP addresses are going to be in a state of flux. If you don't want pervasive surveillance knowing who is talking to whom, you want to obliterate any information that might help an attacker.

If IP addresses are changing during the typical lifetime of a TCP connection, things are worse than I thought.  For longer-term associations, I'd agree with you. [*]  But that's because it's far more likely that at least one of the endpoints will be nomadic than it is that the network itself will need to change the addresses of one or more endpoints.
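
To make the "table stakes" point concrete: the difference is in what the transport uses as its demultiplexing key.  TCP identifies a connection by the (source IP, source port, destination IP, destination port) 4-tuple, so an address change orphans the connection state; QUIC identifies a connection by a connection ID carried in every packet, so packets arriving from a new source address still find the same connection.  A toy sketch in Python (illustrative names only, not any real stack's API):

    class FourTupleDemux:
        """TCP-style: connection state is keyed on the 4-tuple, so a
        lookup after the client's address changes comes up empty."""
        def __init__(self):
            self.conns = {}

        def add(self, four_tuple, conn):
            self.conns[four_tuple] = conn

        def lookup(self, four_tuple):
            return self.conns.get(four_tuple)

    class ConnIDDemux:
        """QUIC-style: connection state is keyed on a connection ID
        carried in each packet, independent of the source address."""
        def __init__(self):
            self.conns = {}

        def add(self, conn_id, conn):
            self.conns[conn_id] = conn

        def lookup(self, conn_id):
            return self.conns.get(conn_id)

    # Client migrates from Wi-Fi (198.51.100.7) to cellular (192.0.2.9).
    tcpish = FourTupleDemux()
    tcpish.add(("198.51.100.7", 40000, "203.0.113.1", 443), "session-1")
    assert tcpish.lookup(("192.0.2.9", 40000, "203.0.113.1", 443)) is None

    quicish = ConnIDDemux()
    quicish.add(b"\x1a\x2b\x3c\x4d", "session-1")
    assert quicish.lookup(b"\x1a\x2b\x3c\x4d") == "session-1"

Note that QUIC also lets the endpoints rotate connection IDs across a migration, precisely so that an on-path observer can't use a stable identifier to re-link the flow after the address changes; that bears directly on the traffic-analysis point above.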

I think you're mixing lots of different things.  The problems with NATs are well known by now, and they aren't limited to the fact that the IP addresses change between the endpoints.  In short, they make applications much less reliable and more expensive to operate than necessary.  But even in a network with universal global addressing, no NATs, and addresses kept the same end-to-end, some of those problems would still exist, due to mobility and middleboxes at least.  That's not an argument in favor of NATs.


> Architecture is a dynamic concept, not something that is static. At one point, the network in my house was running off the MAC addresses for local routing. But what happens when I switch on the SDN? Now we have an IP network running on top of an IP network.

That's almost the exact opposite of the meaning of that word.  Architecture is a set of deliberate high-level design decisions.  That doesn't mean it cannot evolve over time, but the evolution shouldn't happen accidentally.

In many ways IETF (including IAB) has failed to maintain the Internet architecture and respond to changing conditions, with the result that the Internet is currently unpredictable to a large degree, and this harms the ability of the Internet to support applications and users.  I think this happened because IETF wants to treat everything like a working group, and working groups (especially siloed working groups) aren't a good structure for making architectural decisions.  Also, it's widely (and correctly) understood that architectural decisions can be shortsighted, so rather than take the risk of making poor ones, people have simply thrown up their hands and given up.  I'm not sure how to solve this problem within IETF, but the lack of direction and foresight for the Internet architecture is really starting to show.

Taking this specific example: if the best way receiving SMTP servers have to determine a sender's reputation is to look at the source IP address, when we've known for pretty much the entire history of the Internet that this would never work well at scale, and we still haven't addressed the problem in any significant way, ...  NATs only made this problem worse, but the problem has always been lurking, waiting to bite us.
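
For concreteness, the dominant form of that reputation check today is the DNS blocklist: the receiver reverses the octets of the sender's IPv4 address, appends the blocklist's zone, and does an ordinary DNS query; any A record in the answer means "listed".  A minimal sketch (the zone name below is a placeholder, not a real DNSBL, and a production check would distinguish NXDOMAIN from transient DNS failures):

    import socket

    def dnsbl_listed(ipv4, zone="dnsbl.example.org"):
        """Return True if ipv4 is listed in the given DNSBL zone.
        192.0.2.1 is queried as 1.2.0.192.dnsbl.example.org."""
        query = ".".join(reversed(ipv4.split("."))) + "." + zone
        try:
            socket.gethostbyname(query)
            return True    # any A record means the address is listed
        except socket.gaierror:
            return False   # NXDOMAIN (or lookup failure): treat as unlisted

    print(dnsbl_listed("192.0.2.1"))   # False against the placeholder zone

The fragility is visible right in the mechanism: the reputation attaches to an address the sender may share with strangers (behind a NAT or a provider pool) or may not keep, and the size of the IPv6 address space makes per-address listing even less tractable.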

Keith

[*] Then again, I remember times when I'd routinely keep FTP sessions up for well over 24 hours, and I don't see why that's not still reasonable.

