
Re: Planet Postgres and the curse of AI

Hi Greg

I agree with you on the misuse of AI-based tools; in my experience with Postgres, the solutions they suggest often don't work.
It's not bad to get help from these tools, but pasting their solutions wholesale is counterproductive.
I think people should take care when using these tools to suggest solutions for real-world problems.

Regards
Kashif Zeeshan

On Wed, Jul 17, 2024 at 10:22 PM Greg Sabino Mullane <htamfids@xxxxxxxxx> wrote:
I've been noticing a growing trend of blog posts written mostly, if not entirely, with AI (aka LLMs, ChatGPT, etc.). I'm not sure where to raise this issue. I considered a blog post, but this mailing list seemed a better forum to generate a discussion.

The problem is two-fold as I see it.

First, there is the issue of people trying to game the system by churning out content that is not theirs, but was written by an LLM. I'm not going to name specific posts, but after a while it gets easy to recognize things that are written mostly by AI.

These blog posts are usually generic, describing some part of Postgres in an impersonal, mid-level way. Most of the time the facts are not wrong, per se, but they lack nuances that a real DBA would bring to the discussion, and often leave important things out. Code examples are often wrong in subtle ways. Places where you might expect a deeper discussion are glossed over.

So this first problem is that it is polluting the Postgres blogs with overly bland, moderately helpful posts that are not written by a human, and do not really bring anything interesting to the table. There is a place for posts that describe basic Postgres features, but the ones written by humans are much better. (yeah, yeah, "for now" and all hail our AI overlords in the future).

The second problem is worse, in that LLMs are not merely gathering information, but have the ability to synthesize new conclusions and facts. In short, they can lie. Or hallucinate. Whatever you want to call it, it's a side effect of the way LLMs work. In a technical field like Postgres, this can be a very bad thing. I don't know how widespread this is, but I was tipped off about it over a year ago when I came across a blog suggesting using the "max_toast_size configuration parameter". For those not familiar, I can assure you that Postgres does not have, nor will likely ever have, a GUC with that name.
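A quick way to check a claim like that is to query the pg_settings system view, which lists every GUC the running server actually recognizes. A minimal sketch, using the made-up name from that blog post as the example:

    -- Returns zero rows: no such GUC exists in any Postgres release I know of.
    SELECT name, setting, short_desc
    FROM pg_settings
    WHERE name = 'max_toast_size';

    -- A real TOAST-related GUC (added in PostgreSQL 14), for comparison.
    SELECT name, setting, short_desc
    FROM pg_settings
    WHERE name = 'default_toast_compression';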

As anyone who has spoken with ChatGPT knows, getting small important details correct is not its forte. I love ChatGPT and actually use it daily. It is amazing at doing certain tasks. But writing blog posts should not be one of them.

Do we need a policy or a guideline for Planet Postgres? I don't know. It can be a gray line. Obviously spelling and grammar checking is quite okay, and making up random GUCs is not, but the middle bit is very hazy. (Human) thoughts welcome.

Cheers,
Greg

