Re: Weird issue with planner choosing seq scan

On Feb 25, 2008, at 1:19 PM, Stephen Denne wrote:

So should I do a vacuum full and then hope this doesn't
happen again?
Or should I run a VACUUM FULL after each aggregation run?

If your usage pattern generates all of that unused space in one transaction, with no further inserts or updates to that table until the next time you run the same process, then my guess is that you should run a VACUUM FULL on that table after each aggregation run. In that case you wouldn't have to increase max_fsm_pages solely to track a large amount of unused space in that table, since you'd be cleaning it up as soon as you generate it.
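A minimal sketch of that approach, assuming the aggregated table is called user_stats (a hypothetical name, not from this thread):

```sql
-- After the aggregation run commits, reclaim the dead space.
-- VACUUM FULL rewrites the table and takes an exclusive lock,
-- so schedule it for a quiet window.
VACUUM FULL VERBOSE user_stats;

-- On 8.x, VACUUM FULL can leave index bloat behind,
-- so rebuild the indexes as well.
REINDEX TABLE user_stats;

-- Refresh planner statistics after the rewrite.
ANALYZE user_stats;
```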

You earlier had 5.5 million row versions, 2 million of them dead but not yet removable, and you said (even earlier) that the table had 3.3 million rows in it. You now say you've got 6.2 million row versions (with the same 2M dead). So it looks like you're creating new row versions at quite a pace, in which case increasing max_fsm_pages, and not worrying about doing a vacuum full _every_ time is probably a good idea.
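One way to pick a value (a sketch, not something from this thread): on 8.x, a database-wide VACUUM VERBOSE reports at the end how many free-space-map page slots are in use and how many are required, which you can compare against the current setting:

```sql
-- Run as superuser. The final INFO/DETAIL lines of the output
-- report the number of page slots in use and the number required
-- to track all free space.
VACUUM VERBOSE;

-- Compare against the current setting.
SHOW max_fsm_pages;
```

max_fsm_pages is set in postgresql.conf and, on 8.x, requires a server restart to change.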

So 281727 should be the minimum I bump it to, correct?




Have you checked Scott Marlowe's note:

unless you've got a long running transaction

Why are those 2 million dead rows not removable yet? My guess (based on a quick search of the mailing lists) is that they were generated by your aggregation run, and that a long-running transaction started before your aggregation run committed (possibly even before it started), and that transaction is still alive.

Alternatively, it may be a different 2 million dead row versions now than earlier, and may simply be a side effect of your particular usage, and nothing to worry about. (Though it is exactly the same number of rows, which strongly hints at being exactly the same rows.)
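One way to look for such a transaction (a sketch, assuming 8.3, which added the xact_start column to pg_stat_activity):

```sql
-- List backends ordered by transaction start time; any transaction
-- that has been open since before the aggregation run began is a
-- suspect for holding dead rows non-removable.
SELECT procpid, usename, xact_start, current_query
FROM pg_stat_activity
WHERE xact_start IS NOT NULL
ORDER BY xact_start;
```

On 8.2 and earlier there is no xact_start; query_start is the closest available approximation.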


Great detective work; you are correct. We have a daemon that runs constantly, adding new data to that table, and then we aggregate it daily (I said weekly before; I was incorrect), which deletes several rows as it updates a bunch of others. So it sounds like upping max_fsm_pages is the best option.

Thanks again everyone!



---------------------------(end of broadcast)---------------------------
TIP 3: Have you checked our extensive FAQ?

              http://www.postgresql.org/docs/faq
