On 6/10/2024 11:49 PM, Rob Sayre wrote:
Me: why not use AI to monitor a problematic poster on a mailing list?
ChatGPT:
Using AI to monitor problematic behavior on a mailing list is definitely
a possibility and has been explored in various contexts. AI can help in
several ways:
1. **Content Moderation**: AI algorithms can scan messages for
inappropriate content, such as hate speech, harassment, or spam, and
flag them for review by moderators.
2. **Behavior Analysis**: AI can analyze patterns of behavior, such as
frequency of posts, language used, or interactions with other members,
to identify potential problems or conflicts.
3. **Early Detection**: AI can detect emerging issues or conflicts
before they escalate, allowing moderators to intervene early and prevent
further disruption.
4. **User Profiling**: AI can create profiles of individual users based
on their posting history and behavior, which can help moderators
identify repeat offenders or problematic users.
If all that is true, AI should also be able to block spam. Yet Google
and Microsoft, and probably others, have been using machine-learning
algorithms to fight spam for a long time, and spam is still there.
Part of the problem is that the spammers can use the same AI to design
messages that the filter will not flag -- adversarial AI, etc.
If someone wants to attack the IETF, they can do that too: use AI to
design messages that somehow do not break the letter of the rules.
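Here is a toy illustration of that evasion problem, with an invented
phrase list: a filter that matches known bad phrases is trivially
defeated by rewording, even though the intent is unchanged.

    # A toy phrase filter and a re-worded message that slips past it.
    # The banned phrases and both messages are invented for illustration.
    BANNED_PHRASES = {"you are an idiot", "shut up"}

    def flags(message: str) -> bool:
        text = message.lower()
        return any(phrase in text for phrase in BANNED_PHRASES)

    print(flags("Shut up, this thread is pointless."))       # True: caught
    print(flags("Perhaps consider staying quiet, genius."))  # False: same intent, missed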
In general, any use of machine learning needs to assume that the
algorithm will sometimes be wrong, such as flagging as an attack
something that is not, and then have procedures in place to correct the
inevitable mistakes. And guess what, these procedures will look very
much like the current style of content moderation.
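As a minimal sketch of what such a correction procedure might look like
-- the data structures and the example flag are invented -- every model
flag lands in a review queue, and a human moderator makes the final
call:

    # Every model flag lands in a queue; a human makes the final call.
    # These data structures and the example flag are invented for illustration.
    from dataclasses import dataclass

    @dataclass
    class Flagged:
        message: str
        reason: str
        resolved: bool = False
        upheld: bool = False

    review_queue: list[Flagged] = []

    def flag(message: str, reason: str) -> None:
        review_queue.append(Flagged(message, reason))

    def moderator_review(item: Flagged, upheld: bool) -> None:
        # The model's flag is only advisory; the human decision is recorded.
        item.resolved = True
        item.upheld = upheld

    flag("We should rethink this design.", "model: possible hostility")
    moderator_review(review_queue[0], upheld=False)  # a false positive, overturned
    print(review_queue[0])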
Plus, the idea of having machines profile people has a somewhat
Orwellian feel to it. I would rather not see the IETF go down that path.
We seem to get a couple of trolls popping up every year or two. Humans
can deal with that humanely.
-- Christian Huitema