The current forms of “AI” have no concept of state or long-term memory. On each invocation you have to tell it:
This is a Postgres database.
This is my database schema.
These are the indexes I have.
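For example, every single prompt has to restate context along these lines (the table and index here are hypothetical, just to illustrate the kind of DDL you end up pasting in each time):

    CREATE TABLE orders (
        id          bigint PRIMARY KEY,
        customer_id bigint NOT NULL,
        created_at  timestamptz NOT NULL
    );
    CREATE INDEX orders_customer_id_idx ON orders (customer_id);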
After providing that information the “AI” might generate a valid query for your particular database, but it won’t be optimal. The AI doesn’t know how many rows are in each table, what physical media each table is on, or any of the other attributes of your database that are used to calculate the cost of an index scan versus a table scan.
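Those are exactly the inputs Postgres already tracks for its cost model. As a rough sketch (the 'orders' table is the same hypothetical one as above), you can see them in the catalogs and cost settings:

    -- Row and page counts the planner uses for cost estimates
    SELECT relname, reltuples, relpages
    FROM pg_class
    WHERE relname = 'orders';

    -- Per-column statistics gathered by ANALYZE
    SELECT attname, n_distinct, correlation
    FROM pg_stats
    WHERE tablename = 'orders';

    -- Cost constants that model the underlying physical media
    SHOW seq_page_cost;
    SHOW random_page_cost;

A stateless model sees none of this unless you paste it into the prompt, and it goes stale as soon as the data changes.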
So then you could make the jump that an “AI” should be run locally and trained exclusively on your database. Now you are using a general-purpose “AI” algorithm for a very specific task, which is a poor fit. It would also require constant retraining as the data changes, which would be computationally expensive.
Then let’s say you want to write an “AI” algorithm just for Postgres. At that point you have basically rewritten the current Postgres optimizer in a roundabout way.
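That existing optimizer already makes the cost-based decision described above. A quick illustration (same hypothetical 'orders' table): EXPLAIN shows the plan it derives from the statistics, with no per-invocation description needed:

    -- The planner chooses an index scan or a sequential scan
    -- from the statistics above, not from a prompt.
    EXPLAIN SELECT * FROM orders WHERE customer_id = 42;

    -- Refresh the statistics after the data changes and the
    -- same query can legitimately get a different plan.
    ANALYZE orders;
    EXPLAIN SELECT * FROM orders WHERE customer_id = 42;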
On Sat, Jun 22, 2024 at 09:40 Adrian Klaver <adrian.klaver@xxxxxxxxxxx> wrote:
On 6/22/24 04:50, Andreas Joseph Krogh wrote:
> Hi, are there any plans for using some kind of AI for query-planning?
>
> Can someone with more knowledge about this than I have please explain
> why it might, or not, be a good idea, and what the challenges are?
1) Requires a large amount of resources.
2) Produces a high rate of incorrect answers.
>
> Thanks.
>
> --
> *Andreas Joseph Krogh*
> CTO / Partner - Visena AS
> Mobile: +47 909 56 963
> andreas@xxxxxxxxxx
> www.visena.com <https://www.visena.com>
--
Adrian Klaver
adrian.klaver@xxxxxxxxxxx