On Wed, Nov 05, 2008 at 07:51:24PM +0200, Andrus wrote:
> I have Server running on Windows XP using
> PostgreSQL 8.1.4 on i686-pc-mingw32, compiled by GCC gcc.exe (GCC) 3.4.2
> (mingw-special)

You really could do with updating that; 8.1.4 is very old. 8.1.15 is the
latest in the 8.1 series and has lots of bug fixes.

> Db size is 862 MB
>
> Bigger tables:
> 1  1214  pg_shdepend                  775 MB
> 2  1232  pg_shdepend_depender_index   285 MB
> 5  1233  pg_shdepend_reference_index  156 MB

Those look scary, scary big to me. Have you been running without autovacuum
for a while and creating *lots* of tables or something?

> I ran VACUUM ANALYZE.
> It returns
>
> INFO: free space map contains 22501 pages in 77 relations
> DETAIL: A total of 20000 page slots are in use (including overhead).
> 111216 page slots are required to track all free space.
> Current limits are: 20000 page slots, 1000 relations, using 186 KB.
> NOTICE: number of page slots needed (111216) exceeds max_fsm_pages (20000)
> HINT: Consider increasing the configuration parameter "max_fsm_pages" to a
> value over 111216.
> Query returned successfully with no result in 201099 ms.

It's saying that there are a lot of tables with unused space in them. If
you've deleted lots of stuff from the database then this will be normal,
but because you didn't mention many details I'd assume this probably isn't
right. VACUUM FULL and then lots of REINDEXing would be one solution, but
it probably isn't the easiest.

> How to speed up this query ?
>
> Should I set max_fsm_pages to a 113000 or other suggestions ?

Doing that will make the error message go away, but it won't solve the
underlying problem. Your database looks quite bloated; if you can afford
the downtime I'd be tempted to do a full backup and restore. This will
reduce bloat a lot and also provide a good opportunity to update PG.
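For reference, those two stop-gap fixes might look something like the
sketch below. The user and database names are placeholders, and note that
on 8.1 a change to max_fsm_pages only takes effect after a server restart;
VACUUM FULL on pg_shdepend also needs superuser rights since it's a shared
catalog:

```shell
# 1. In postgresql.conf, raise the free space map limit, e.g.:
#        max_fsm_pages = 120000
#    then restart the server (required for this setting on 8.1).

# 2. Shrink the bloated shared catalog and rebuild its indexes
#    ("postgres" user/database here are placeholders):
psql -U postgres -d postgres -c "VACUUM FULL pg_shdepend;"
psql -U postgres -d postgres -c "REINDEX TABLE pg_shdepend;"
```

VACUUM FULL takes an exclusive lock and can run for a long time on a table
this size, which is part of why the dump/restore route below is usually
less painful.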
The good thing about doing it from a restore is that you don't have to go
through REINDEXing everything by hand and potentially miss lots of things
out. If things are going to shrink a lot, restoring is normally quicker as
well.

A good way to test would be to do a backup and see how big the resulting
file is. I'd expect the database to be three or four times the size of the
plain-text backup (depending on table design and index use it can vary
quite a bit either way), so if the dump is less than a hundred MB you're
probably better off doing a restore.

  Sam

-- 
Sent via pgsql-general mailing list (pgsql-general@xxxxxxxxxxxxxx)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general
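A sketch of that test-then-restore sequence, assuming a database called
"mydb" (a placeholder) and a maintenance window; check the dump size before
committing to the drop:

```shell
# Plain-text dump; its size is the rough yardstick described above
pg_dump -U postgres mydb > mydb.sql
ls -lh mydb.sql          # compare against the 862 MB the database uses on disk

# If the dump is small enough to make a restore worthwhile:
dropdb -U postgres mydb
createdb -U postgres mydb
psql -U postgres -d mydb -f mydb.sql   # restore rebuilds tables and indexes compactly
```

Doing the restore into a freshly built 8.1.15 (or newer) installation gets
you the version upgrade at the same time.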