That's a good thought. Maybe the statistics are stale and you are getting bad
plans? It could also be caused by major updates to the data (as opposed to
plain growth).
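Something along these lines (just a sketch against the standard
pg_stat_user_tables view) would show when the planner statistics for each
table were last refreshed:

  -- when did ANALYZE / autoanalyze last run for each table?
  SELECT schemaname, relname, last_analyze, last_autoanalyze
  FROM pg_stat_user_tables
  ORDER BY GREATEST(last_analyze, last_autoanalyze) NULLS FIRST;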
We checked the number of dead tuples etc. recently, and that looks OK. And
since "everything" in the database seems to be very slow at the moment, I
guess the problem is not caused by bad plans for specific tables/queries.
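(The dead-tuple check was roughly along these lines, a sketch against the
standard statistics view rather than the exact query we ran:)

  -- tables with the most dead tuples
  SELECT schemaname, relname, n_live_tup, n_dead_tup
  FROM pg_stat_user_tables
  ORDER BY n_dead_tup DESC
  LIMIT 20;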
Gerhard, have you done an EXPLAIN ANALYZE on any of your slow queries? Have
you run ANALYZE lately?
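(I.e. something like the following, with "some_table" standing in for one of
your actual tables/queries, purely as a hypothetical example:)

  -- run the query for real and get the actual plan and timings
  EXPLAIN ANALYZE SELECT count(*) FROM some_table;

  -- refresh the planner statistics for that table
  ANALYZE some_table;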
Yes, we added the 'auto_explain' module to log and analyze queries taking >= 5000 ms.
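(For reference, this is roughly the per-session equivalent of our setup; in
reality the settings live in postgresql.conf, with the module loaded via
shared_preload_libraries:)

  -- sketch of an auto_explain setup like the one described above
  LOAD 'auto_explain';
  SET auto_explain.log_min_duration = 5000;  -- log plans for statements taking >= 5000 ms
  SET auto_explain.log_analyze = on;         -- include actual row counts and timings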
A sample result from the logs (there is a lot of stuff in the logs; I picked
this query because it is very simple):
2011-09-06 04:00:35 CEST STATEMENT: INSERT into keywords.table_x_site_impact
    (content_id, site_impact_id, site_impact) VALUES (199083087, 1, 1.000000)
2011-09-06 04:00:35 CEST LOG: duration: 15159.723 ms  statement: INSERT into
    keywords.table_x_site_impact (content_id, site_impact_id, site_impact)
    VALUES (199083087, 1, 1.000000)
2011-09-06 04:00:35 CEST LOG: duration: 15159.161 ms  plan:
    Result  (cost=0.00..0.01 rows=1 width=0) (actual time=0.017..0.019 rows=1 loops=1)
      Output: nextval('keywords.table_x_site_impact_internal_id_seq'::regclass),
              199083087::bigint, 1::smallint, 1::double precision