On Fri, Jan 15, 2021 at 12:27 PM Michael Lewis <mlewis@xxxxxxxxxxx> wrote:

On Fri, Jan 15, 2021 at 10:22 AM Alexander Stoddard <alexander.stoddard@xxxxxxxxx> wrote:

The 'fast plans' use parallel seq scans. The 'slow plans' use index scans. It appears a good query plan correctly predicts it should be bulk processing the tables, but bad ones get fooled into thrashing (hard disk, not SSD) by mispredicting too few rows to join between the tables.

How many tables are involved?
The queries are complex, with multiple joins to 10-plus tables, although most are to tiny enum-type lookup tables. I believe it is the join between the two large tables that I have described that causes the issue, and that seems to be reflected in the different strategies in the plans. For my own learning, and to clarify the problem, I will probably have to try to reproduce the behavior in a test case.
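As a starting point for that test case, this is a minimal sketch of the check I have in mind; big_table_a, big_table_b, and the join column are hypothetical stand-ins for the two large tables, since the real schema isn't shown here:

-- Compare the planner's estimate with reality on the problem join.
EXPLAIN (ANALYZE, BUFFERS)
SELECT count(*)
FROM big_table_a a
JOIN big_table_b b ON b.a_id = a.id;
-- In the output, compare "rows=" (the estimate) against the actual row count
-- reported on the join node: a large underestimate is what would push the
-- planner toward nested-loop index scans instead of parallel seq scans plus
-- a hash join.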
Are you sure it is stats getting updated causing the change in behavior?
No, I'm not sure. Could something else flip a plan after an ANALYZE?
Differing performance across multiple runs of the same query could be due to caching etc., but that would be a timing difference without a change in query plan. The output plans I see are radically different and correlate with large-magnitude performance changes.
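One way I could test whether the statistics refresh itself is the trigger, rather than caching, would be something like the following (again with hypothetical table names standing in for the two large tables):

SELECT relname, last_analyze, last_autoanalyze
FROM pg_stat_user_tables
WHERE relname IN ('big_table_a', 'big_table_b');

ANALYZE big_table_a;
ANALYZE big_table_b;

EXPLAIN
SELECT count(*)
FROM big_table_a a
JOIN big_table_b b ON b.a_id = a.id;
-- If the plan flips right after ANALYZE, with no data change in between,
-- the refreshed row estimates are the likely cause rather than caching.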
Are you hitting the genetic optimizer?
I am doing nothing to specify the optimizer. Do I have configurable options in that regard? I was unaware of them.
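Reading up after your question: the genetic optimizer appears to be controlled by the standard geqo settings. It kicks in once a query joins at least geqo_threshold relations (default 12), which a 10-plus table join could approach, and its join search is randomized (seeded by geqo_seed). A sketch of what I plan to check, with the same placeholder table names as above:

SHOW geqo;                 -- on by default
SHOW geqo_threshold;       -- default 12 relations
SHOW join_collapse_limit;  -- default 8; limits the exhaustive join search

SET geqo = off;            -- session-only, to see if plan instability goes away
EXPLAIN
SELECT count(*)
FROM big_table_a a
JOIN big_table_b b ON b.a_id = a.id;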
Thank you,
Alex