On Fri, Jul 24, 2020 at 10:16:24AM +0200, Stephane Bortzmeyer wrote:

> And this is also why it cannot be implemented in tools. Even the best
> AI cannot know if the use of a word like master is oppressive or not.

And we also know that language policing can be an oppressive and exclusionary tool, and a great deal of caution and discretion is required to avoid that outcome.

http://paulgraham.com/orth.html

Indeed, at the IETF what is probably most oppressive and exclusionary is the clubbiness and group-think of the established cliques in working groups, which make outsiders unwelcome because of their outsider perspective. Policing of language may well reinforce that dynamic, and make the IETF even more of an exclusive club than it already is.

Policing of language creates a climate of fear and empowers politically strident voices at both ends of the spectrum. It can turn all speech political, and can backfire by amplifying consciousness of ethnic/racial/gender distinctions and by stoking resentment.

The cited evidence that technical terms play a meaningful role in deterring participation by under-represented groups looks rather anecdotal. For more credible deterrents, look at lack of educational opportunities, cultural attitudes toward an interest in technology (popular terms such as "nerds", "geeks", ...), perceived employment prospects, absence of role models, etc.

To see the inefficacy of language policing, contrast the historical official discourse of Soviet-bloc countries, with its purported internationalism, denunciation of racism, etc., with the reality that xenophobia was and still is much more prevalent in eastern Europe than in the liberal democracies to the west. Of course, wide disconnects between official ideology and reality are not limited to the Soviet bloc; they are rather a fundamental feature of totalitarian systems. The more removed the ideals from actual practice, the more power to arbitrarily impose penalties when convenient.
Mere good intentions don't always produce good outcomes. I do not impute ill motives to those who are trying to make the world a more just place, but I am rather sceptical that the proposal at hand does more good than harm. I am quite sure that it is exclusionary to those who see the tradeoffs in a different light, and who are justly afraid to speak up given the current political climate. At the same time, the bad words to be banned are only speculatively and tenuously exclusionary for the under-represented groups the proposal is intended to support. A great deal of caution is therefore appropriate in moving forward in the proposed direction.

Where the existing terms of art are unclear, and alternatives are better and/or more widely understood, working groups can and often will encourage authors to choose clearer terms. Otherwise, absent manifestly provocative or hostile intent (we'll generally only know it when we see it), it is likely best not to police language merely on the grounds that it uses words that, out of context, are tenuously connected to (might remind one of) a present or historical injustice.

-- 
Viktor.