On 6/7/21 8:16 AM, Phillip Hallam-Baker wrote:
What we have here is the predictable result of a company that failed to take moderation seriously and is now desperately throwing technology at a problem rather than fixing the core problem: they designed their environment to maximize conflict because that was most profitable for them.
As much as I hate FB (I left the platform in 2016 and have never
looked back), I think "failed to take moderation seriously" glosses
over a number of inherent problems with social media, particularly
when done on a large scale.
One is of course that human moderation is time-consuming and
therefore expensive if the moderators are paid. It's also hard
for a large number of human moderators (required to deal with
large volumes of users and traffic) to act uniformly. On another
widely used platform the moderators are almost completely
arbitrary, despite supposedly enforcing a common set of clear
standards. So it's not surprising if social media platforms
resort to algorithms. And of course the algorithms are flawed
because AI is a long way from understanding the many subtleties of
human interaction.
Unpaid human moderators can be even more capricious than paid humans, because the desire to impose one's own prejudices on others is a strong motivator of volunteers.
Even under the best of conditions, moderation (whether done by
humans or machines) is dangerous both because it can easily
squelch valuable input, and because it's often easily gamed for
that purpose by people for whom such input is inconvenient. No
matter how noble the intent, the effect of moderation is often to
favor established prejudices, or worse, to enable bullying.
I don't claim to know the solution, but I don't think it's a simple matter of "taking moderation seriously".
Keith