Re: Really really slow select count(*)

On 02/04/2011 10:03 AM, felix wrote:

> max_fsm_pages     | 153600 | Sets the maximum number of disk pages for which free space is tracked.
> max_fsm_relations | 1000   | Sets the maximum number of tables and indexes for which free space is tracked.
>
> how do I determine the best size or if that's the problem?

Well, the best way is to run:

vacuumdb -a -v -z &>vacuum.log

And at the end of the log, it'll tell you how many free space map page slots it needs versus how many your current limits allow.
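
If you'd rather check from inside psql, a roughly equivalent alternative (on a pre-8.4 server, where these FSM settings still exist) is a database-wide verbose vacuum; the summary near the end of the INFO output shows how many page slots are needed and what the current limits are (exact wording varies a bit by version):

  VACUUM VERBOSE ANALYZE;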

From the sounds of your database, 150k is way too small. If a single table is getting 10-50k updates per day, there's a good chance a ton of other tables are seeing similar traffic. With max_fsm_pages at that setting, once the free space map fills up, the space freed by further updates and deletes is effectively forgotten, and forgotten space isn't reused by new inserts or updates.

Your database has probably been slowly expanding for months without you realizing it. The tables with the most turnover get hit the hardest, which sounds like exactly what happened here.
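
If you want to see which tables are the likely suspects, here's a quick and admittedly rough check (it assumes 8.3 or later for the dead-tuple columns, and it only shows current dead rows and on-disk size, not the space that's already been forgotten, but big tables with lots of dead rows are usually the ones bloating):

  SELECT relname,
         n_live_tup,
         n_dead_tup,
         pg_size_pretty(pg_relation_size(relid)) AS on_disk
    FROM pg_stat_user_tables
   ORDER BY n_dead_tup DESC
   LIMIT 20;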

You can stop the bloating by raising max_fsm_pages, but to regain all the lost space and performance you'll either have to go through and VACUUM FULL every table in your database, or dump and restore it (the latter would actually be faster). Before I even touch an older PostgreSQL DB, I set max_fsm_pages to something over 3 million just as a starting value, to be on the safe side. A little extra memory is a small price to pay for stopping the gradual expansion.
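
If it helps, here's the rough shape of that on a pre-8.4 server. The numbers and the table name below are just illustrative placeholders, not values tuned for your box, and both FSM settings need a full restart to take effect:

  # postgresql.conf (pre-8.4 only; requires a restart)
  max_fsm_pages     = 3000000
  max_fsm_relations = 2000    # raise too if you have more than ~1000 tables + indexes

Then reclaim what's already been lost, table by table, in a maintenance window (VACUUM FULL takes an exclusive lock, and on these older versions it tends to bloat indexes, so follow it with a REINDEX):

  VACUUM FULL VERBOSE some_bloated_table;
  REINDEX TABLE some_bloated_table;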

Your reindex was a good idea. Indexes do sometimes need that. But your base tables need work too. Unless you're on 8.4 or above (where the free space map is managed automatically and these settings go away), autovacuum alone isn't enough.
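
For completeness, it's also worth double-checking that autovacuum is actually turned on and seeing how it's configured (pg_settings is standard; the exact list of autovacuum parameters varies by version):

  SELECT name, setting
    FROM pg_settings
   WHERE name LIKE 'autovacuum%';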

Just to share an anecdote: about five years ago I was with a company that also used the default max_fsm_pages setting. Their DB had expanded to 40GB and was only a couple of weeks away from filling their disk. I set max_fsm_pages to 2 million, set up a bunch of scripts to VACUUM FULL the tables from smallest to largest (to free up enough working space for the larger tables, you see), and the database ended up at less than 20GB.
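
In case it's useful, here's a sketch of that smallest-to-largest approach; the database name is a placeholder, the ordering uses the planner's relpages estimate, and VACUUM FULL takes exclusive locks, so this belongs in a quiet window:

  psql -At -d yourdb -c "
    SELECT quote_ident(n.nspname) || '.' || quote_ident(c.relname)
      FROM pg_class c
      JOIN pg_namespace n ON n.oid = c.relnamespace
     WHERE c.relkind = 'r'
       AND n.nspname NOT IN ('pg_catalog', 'information_schema')
     ORDER BY c.relpages" |
  while read tbl; do
      psql -d yourdb -c "VACUUM FULL VERBOSE $tbl"
  done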

--
Shaun Thomas
OptionsHouse | 141 W. Jackson Blvd. | Suite 800 | Chicago IL, 60604
312-676-8870
sthomas@xxxxxxxxx

______________________________________________

See  http://www.peak6.com/email_disclaimer.php
for terms and conditions related to this email

--
Sent via pgsql-performance mailing list (pgsql-performance@xxxxxxxxxxxxxx)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-performance

