> --On Saturday, June 29, 2024 09:54 -0400 Keith Moore <moore@xxxxxxxxxxxxxxxxxxxx> wrote:
>
>> On 6/29/24 08:20, John C Klensin wrote:
>>
>>> From my point of view, this is mostly about the IETF's credibility when we establish standards and at least implicitly encourage others to use them. If we do not or cannot use those standards in our own work, and avoid doing so without public and understandable explanations, it calls everything we are doing into question.
>>
>> +1  But I want to go further than that. If IETF insists that it needs to outsource essential services that are based on IETF protocols, that doesn't speak well for those protocols.
>
> Keith,
>
> The reason I am suggesting a careful, public, and fairly prominent explanation for our decision to outsource to an organization that does not support IPv6 (for email in this case) is precisely to separate the protocols and their quality/usability/appropriateness from what is essentially an administrative (maybe economic as well as management) decision. I think it is also appropriate for us to make it clear that we would be much happier with the chosen vendor/supplier if they supported mail (and other protocols) over IPv6 even if other considerations (including "no one we could find is any better about that") dictate that we choose them anyway.
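(As an aside: whether a provider can accept inbound mail over IPv6 is mechanically checkable, since its domain's MX hosts must publish AAAA records. A rough sketch of that check follows; the host name is illustrative, not a claim about any actual vendor, and resolving the MX records themselves would need a DNS library such as dnspython, which this sketch deliberately avoids.)

```python
import ipaddress
import socket


def has_ipv6(addresses):
    """Return True if any address string in `addresses` is an IPv6 address."""
    return any(ipaddress.ip_address(a).version == 6 for a in addresses)


def mx_host_ipv6_capable(host):
    """Resolve `host` and report whether it has at least one IPv6 address.

    A provider that accepts inbound mail over IPv6 must publish AAAA
    records for its MX hosts; this checks a host name you already have
    (e.g. one taken from the domain's MX records).
    """
    try:
        infos = socket.getaddrinfo(host, "smtp", proto=socket.IPPROTO_TCP)
    except socket.gaierror:
        return False
    # Each getaddrinfo entry ends with a sockaddr tuple whose first
    # element is the textual address (IPv4 or IPv6).
    return has_ipv6(sockaddr[0] for *_, sockaddr in infos)
```

For example, `has_ipv6(["192.0.2.1", "2001:db8::25"])` is True, while a host whose name resolves only to IPv4 addresses would fail the check.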
Not buying it, or at least, I disagree.
Let me first say that I find it highly inappropriate for such decisions to be made without input from the broader community, since those choices risk undermining IETF's core mission.
> It seems to me that, in the above, you are making a different argument, i.e., that the IETF should not be outsourcing _any_ important ("essential") service that runs on the Internet since substantially every protocol is either our work or resting on it.
I'm not actually making that argument. I see a big difference between outsourcing services used by the general public (which has become MUCH larger in, say, the past 20 years), versus outsourcing services used primarily by IETF participants, which is, I believe, somewhat smaller than it has been at some points in the past.
> While I got started a bit before you did, I was convinced for many years that one of the main attractions of the Internet architecture (and, to only a slightly lesser degree, the ARPANET with NCP and friends before it) was that they enabled local services and peer-to-peer protocols rather than relatively tiny client machines and giant, sometimes distributed, servers. I am still not convinced that putting everything into centrally managed "cloud" servers is the right thing to do, especially when I see efforts to change and optimize protocols for that model in ways that could impede or disadvantage more distributed models. I even draw some comfort from seeing things as cyclic, going back to the early iterations of single whole-institution (or cluster of institutions) servers evolving toward departmental machines and then to departments discovering they didn't really want to be in the computer operations business and centralizing some things as a result. But it is unquestionably today's reality and it is not at all clear to me that asking the IETF LLC to fight that reality (and presumably expand its staff even more to do so) would be in the best interests of either the IETF or the Internet more generally.
Anytime someone cites something as "reality" it should raise a red flag. It's too easy to make the leap from such a label to the conclusion that "reality" isn't malleable, that it cannot change, that it's beyond our control or influence. We should at least be asking "Why is that the reality?" (if indeed it is), and "What can/should we do about it?"
(Or do we just want to throw in the towel, say "the Internet was nice while it lasted", and let it atrophy? Because I see far too much of that attitude in IETF these days, and it's appalling.)
Keith