Re: Dirty reads on index scan

On Fri, Sep 22, 2023 at 5:44 AM Koen De Groote <kdg.dev@xxxxxxxxx> wrote:
Alright.

So, if I want to speed up the query, apart from trying to vacuum it beforehand, I suspect I've hit the limit of what this query can do?

It is more a limit on the system as a whole, not just one query.  How is this table being inserted?  Updated?  Deleted?  Is the physical row order correlated with the insert_timestamp column (look at pg_stats.correlation)?  If not, why not?  (Based on the name of the column, I would expect it to be highly correlated.)
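For reference, a rough way to check that (the table name here is a placeholder for yours):

    SELECT tablename, attname, correlation
    FROM pg_stats
    WHERE tablename = 'your_table'        -- placeholder, substitute your table
      AND attname = 'insert_timestamp';

A correlation near 1.0 means an index scan on that column visits the heap mostly in physical order; a value near 0 means it jumps around and touches many more pages.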

Did you try the VACUUM, and if so, did it work?  Knowing that might help us figure out what else might work, even if you don't want to do the vacuum.  But why not just do the vacuum?
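If you do try it, something along these lines (table name again a placeholder) would both report how many dead tuples get removed and refresh the planner statistics:

    VACUUM (VERBOSE, ANALYZE) your_table;   -- placeholder table name

    -- see when vacuum/autovacuum last ran on it
    SELECT relname, last_vacuum, last_autovacuum
    FROM pg_stat_user_tables
    WHERE relname = 'your_table';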

You should show us the actual plans, not just selected excerpts from them.  There might be clues there that you haven't excerpted.  Turn on track_io_timing first if it is not on already.
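Roughly, in the session where you run it (the query itself is just a placeholder here, and changing track_io_timing needs superuser or the appropriate SET privilege):

    SET track_io_timing = on;
    EXPLAIN (ANALYZE, BUFFERS)
    SELECT ... ;   -- your actual query

and then post the full output.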
 
Because the table is just going to keep growing. And it's usually a query that runs once per day, so it's a cold run each time.

Why do you care if a query run once per day takes 1 minute to run?
 
Is this just going to get slower and slower and there's nothing that can be done about it?

It is probably not so much the size of the data (given that it is already far too large to stay in cache) as the number of dead tuples it had to wade through.  Having to read 16571 pages just to find 1000 tuples from a single-loop index scan suggests you have a lot of dead tuples.  Like, 16 for every live tuple.  Why do you have so many, and why isn't index micro-vacuuming cleaning them up?  Do you have long-running transactions which are preventing cleanup?  Are you running this on a standby?
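If it helps, a couple of rough checks (table name is a placeholder): the dead-tuple count for the table, and any transactions that have been open long enough to hold back cleanup:

    -- dead vs. live tuples
    SELECT relname, n_live_tup, n_dead_tup
    FROM pg_stat_user_tables
    WHERE relname = 'your_table';

    -- transactions open for more than an hour (threshold is arbitrary)
    SELECT pid, state, xact_start, query
    FROM pg_stat_activity
    WHERE xact_start < now() - interval '1 hour'
    ORDER BY xact_start;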

Cheers,

Jeff
