On Mon, Sep 11, 2023 at 11:07 PM David Rowley <dgrowleyml@xxxxxxxxx> wrote:
> On Tue, 12 Sept 2023 at 02:27, Tom Lane <tgl@xxxxxxxxxxxxx> wrote:
> >
> > David Rowley <dgrowleyml@xxxxxxxxx> writes:
> > > I'm not sure if you're asking for help here because you need planning
> > > to be faster than it currently is, or if it's because you believe that
> > > planning should always be faster than execution. If you think the
> > > latter, then you're mistaken.
> >
> > Yeah. I don't see anything particularly troubling here. Taking
> > circa three-quarters of a millisecond (on typical current hardware)
> > to plan a four-way join on large tables is not unreasonable.
> I took a few minutes to reverse engineer the tables in question (with
> assistance from an AI bot) and ran the query in question.
> Unsurprisingly, I also see planning as slower than execution, but with
> planning about 12x slower than execution vs the reported ~18x.
>
> Planning Time: 0.581 ms
> Execution Time: 0.048 ms
>
> Nothing alarming in perf top while executing the query in pgbench with
> -M simple. I think this confirms the problem is just one of expectations.
Yep. Very fast queries often execute faster than they plan. Postgres implements a really dynamic flavor of SQL (operator overloading, for example), which probably doesn't help things. This is just the nature of SQL really. To improve things, just use prepared statements -- that's why they are there.
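To make that concrete, here's a minimal sketch (the table and column names are made up, not taken from the schema in this thread): prepare the join once, then execute it with different parameters, so the per-call planning cost drops out.

-- Parsed and analyzed once at PREPARE time; after the first few EXECUTEs
-- the server will typically cache a generic plan, so later calls skip
-- planning entirely.
PREPARE fetch_order(uuid) AS
  SELECT o.id, c.name, i.sku
  FROM orders o
  JOIN customers c ON c.id = o.customer_id
  JOIN order_items i ON i.order_id = o.id
  WHERE o.id = $1;

EXECUTE fetch_order('00000000-0000-0000-0000-000000000001');
EXECUTE fetch_order('00000000-0000-0000-0000-000000000002');

Most drivers expose the same thing as server-side prepared statements, and pgbench will exercise it with -M prepared instead of -M simple.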
As an aside, the style of SQL produced for this test (GUIDs, record-at-a-time thinking) is also not my cup of tea. There are some pros to it, but it tends to beat on the database. If you move this logic into the database, this kind of problem tends to evaporate. It's a very curious mode of thinking I keep seeing: in order to "reduce load on the database", the database is asked to set up and tear down a transaction for every single record fetched :).
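Roughly what I mean by moving the logic in (again, hypothetical names, just to show the shape): fetch the whole working set with one set-based statement instead of issuing a query per record.

-- record-at-a-time: N round trips, N plans, N transactions
--   SELECT ... FROM orders o ... WHERE o.id = $1;  -- issued once per guid
-- set-based: one round trip, one plan, one transaction
SELECT o.id, c.name, i.sku
FROM orders o
JOIN customers c ON c.id = o.customer_id
JOIN order_items i ON i.order_id = o.id
WHERE o.created_at >= now() - interval '1 day';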
merlin