On Fri, Jul 19, 2024 at 3:22 AM Laurenz Albe <laurenz.albe@xxxxxxxxxxx> wrote:
> I have no problem with that definition, but it is useless as a policy:
> Even in a blog with glaring AI nonsense in it, how can you prove that the
> author did not actually edit and improve other significant parts of the text?
Well, we can't prove it 100%, but we can have ethical guidelines. We already have other guidelines that are open to interpretation (and plenty of Planet posts bend the rules quite often, IMO, but that's another post).
> Why not say that authors who repeatedly post grossly counterfactual or
> misleading content can be banned?
Banned is a strong word, but their posts can certainly be removed, and they can receive warnings from the Planet admins. It helps if the admins can point to a policy. Perhaps, as you hint, we need a policy that discourages not just AI-generated content but wrong or misleading content in general (which was not much of a problem before LLMs arrived, to be honest).
Cheers,
Greg