On Thu, 4 Jul 2024 at 08:10, Keith Moore <moore@xxxxxxxxxxxxxxxxxxxx> wrote:
> If IP addresses are changing during the typical lifetime of a TCP connection, things are worse than I thought. For longer term associations, I'd agree with you. [*] But this is because it's far more likely that at least one of the endpoints will be nomadic than it is that the network itself will need to change addresses of one or more endpoints.
Well, yes - a TCP connection lasts, like a wizard, precisely as long as it needs to.
It's not just that a device is nomadic in the sense of "it moves dramatically from one geographic position to another", it's that a device is most likely connected wirelessly. While wireless connections are essentially wondrous magic, they also fade in and out, and devices might use 4 (or more) Gs in between Access Points, and different Access Points might not be on the same network (or preserve the IP address even when they are, or preserve the NAT bindings even when they do).
Much of the reason TCP sessions typically don't outlast the physical connectivity is that we've become used to this, and designed protocols accordingly. Those of us who've spent a long time on protocols like IMAP and XMPP, which benefit from lengthy connections and don't rely on mobile push services to paper over the cracks, have hit these problems quite a bit (see QRESYNC, XEP-0198, XEP-0388 - honestly it forms the bulk of my aimless scribbling).
In addition, we end up with wasteful network traffic, in the form of application-level pings and keepalives, which exists only to defeat NAT timeouts.
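To make concrete the sort of thing I mean: here's a minimal sketch of an application-level keepalive loop, of the kind XMPP clients have done with whitespace pings for years. The interval, the payload, and the function names are all my own illustrative assumptions, not any particular client's implementation; the only real requirement is that the interval stays comfortably under the middlebox's idle timeout.

```python
import socket
import threading


def keepalive_loop(sock: socket.socket, stop: threading.Event,
                   interval: float = 50.0) -> None:
    """Periodically send a single whitespace byte so the NAT binding
    never goes idle. The byte carries no application meaning; it
    exists purely to reset the middlebox's idle timer.

    interval=50 is an assumption: it merely needs to be shorter than
    whatever idle timeout the NAT in the path happens to enforce."""
    # Event.wait() returns False on timeout, True once stop is set,
    # so this loop sends one byte per interval until asked to stop.
    while not stop.wait(interval):
        try:
            sock.sendall(b" ")  # XMPP-style whitespace keepalive
        except OSError:
            break  # connection is gone; let the caller reconnect
```

The irony, of course, is that every one of these bytes is traffic the application doesn't want and the network doesn't need - it's there solely to stop a box in the middle from forgetting about us.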
As Mans says, this is a significant feedback loop, and many developers have simply never experienced it, thanks to the many layers of workaround we typically employ. That's why you now see things like JMAP arriving, which - inter alia - switches to HTTP because nothing else works any more. Moreover, hosting at places like AWS is significantly easier and cheaper if all you want to do is base-level HTTP/1.1 (they may provide an HTTP/2 or HTTP/3 load balancer, but they'll talk to you only using HTTP/1.1).
What's fun is that this in turn means we have efforts trying to rebuild TCP-like functionality on top of HTTP (WebSockets, WebTransport, etc), but these are unlikely to take off either, because they, like TLS, will be terminated at the load balancer.
In turn, any unilateral flow of data from a service to a consumer - what we now call "push" - ends up being transported entirely out of band via the consumer device manufacturer's system. Luckily this is mentioned aloud within the IETF only with hushed mumbling, otherwise we might have to consider the privacy effects this has, because every datagram sent over push seems to identify both the device and the service. The only thing worse would be centralising all DNS queries to a handful of huge providers, but luckily nobody would ever allow that to happen, especially not the same providers as the mobile push ones.
This, of course, leads directly into your comments on architecture (or the lack of it), which I will respond to separately if I can work up the energy...
[*] then again, I remember times when I'd routinely keep FTP sessions up for well over 24 hours, and I don't see why that's not still reasonable.
Mostly it's unreasonable because it's FTP, which is rather like using TOPS-20: an unmaintained protocol, and almost certainly not the right choice unless you want the perverse nerdy pleasure of using an ancient system. And yes, I was involved in some of the last work on that, too - indeed, the first IETF work I did was around FTP and NAT traversal.
Dave.