Re: Why we really can't use Facebook for technical discussion.

On Mon, Jun 7, 2021 at 10:31 AM Keith Moore <moore@xxxxxxxxxxxxxxxxxxxx> wrote:

On 6/7/21 8:16 AM, Phillip Hallam-Baker wrote:

What we have here is the predictable result of a company that failed to take moderation seriously and is now desperately throwing technology at the problem rather than fixing the core problem: they designed their environment to maximize conflict because that was most profitable for them.

As much as I hate FB (I left the platform in 2016 and have never looked back), I think "failed to take moderation seriously" glosses over a number of inherent problems with social media, particularly when done on a large scale.

One is of course that human moderation is time-consuming and therefore expensive if the moderators are paid. It's also hard for a large number of human moderators (required to deal with large volumes of users and traffic) to act uniformly. On another widely used platform the moderators are almost completely arbitrary, despite supposedly enforcing a common set of clear standards. So it's not surprising if social media platforms resort to algorithms. And of course the algorithms are flawed, because AI is a long way from understanding the many subtleties of human interaction.

Unpaid human moderators can be even more capricious than paid humans, because the desire to impose one's own prejudices on others is a strong motivator of volunteers.

Even under the best of conditions, moderation (whether done by humans or machines) is dangerous, both because it can easily squelch valuable input and because it's often easily gamed for that purpose by people for whom such input is inconvenient. No matter how noble the intent, the effect of moderation is often to favor established prejudices or, worse, to enable bullying.

I don't claim to know the solution, but I don't think it's a simple matter of "taking moderation seriously".


The main reason I am on the platform is to try to understand which approaches work and which do not.

Slashdot has a rather less respectful audience, yet its karma system produces vastly better results. But the biggest takeaway for me is that one-size-fits-all does not work for curation or moderation.
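For what it's worth, the essence of that style of system fits in a few lines of Python. This is purely my own rough illustration, not Slashdot's actual code or algorithm: peer ratings accumulate as per-user karma, and each reader picks their own display threshold instead of one moderator deciding for everyone.

# Hypothetical sketch of karma-style moderation -- my own illustration,
# not Slashdot's actual algorithm. Reputation is earned from peer ratings,
# and readers filter by score rather than a single moderator deciding
# what is visible.

from dataclasses import dataclass

@dataclass
class User:
    name: str
    karma: int = 0          # accumulated reputation from peer moderation

@dataclass
class Comment:
    author: User
    text: str
    score: int = 1          # default visibility score

def moderate(comment: Comment, delta: int) -> None:
    """Apply a +1/-1 moderation; the author's karma follows the comment."""
    delta = max(-1, min(1, delta))
    comment.score += delta
    comment.author.karma += delta

def visible(comments: list[Comment], threshold: int = 1) -> list[Comment]:
    """Each reader chooses a threshold; nothing is deleted outright."""
    return [c for c in comments if c.score >= threshold]

# Two readers can browse the same thread at different thresholds.
alice, bob = User("alice"), User("bob")
thread = [Comment(alice, "useful detail"), Comment(bob, "flamebait")]
moderate(thread[0], +1)
moderate(thread[1], -1)
print([c.text for c in visible(thread, threshold=1)])   # ['useful detail']

The point of the sketch is the design choice: no single rating removes anything, and the reader, not the platform, decides how aggressively to filter.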

Yes, this is a hard problem, which is exactly why no single person should be the sole decider. I have never said anything on Facebook that would not pass in an Oxford Union debate[1]. We have known for over 15 years that content inspection is a failing strategy for spam filtering; why apply it to enforce automatic bans on individuals?

PHB


[1] OK, that might be considered a low bar, since I once heard Jacob Rees-Mogg addressed as 'Greased Frogg', but who is to say such a sobriquet is truly undeserved?

