On 12/16/05, Moritz Bayer <moritz.bayer@xxxxxxxxxxxxxx> wrote:
> This is really weird, just a few hours ago the machine ran very smoothly,
> serving the data for a big portal.

Can you log the statements that are taking a long time and post them to the
list, along with the table structures and indexes for the tables being used?

To log slow statements, edit the postgresql.conf file and change the
following parameter:

log_min_duration_statement = 2000    # 2 seconds

Your log should now be catching the statements that are slow. Then use those
statements to get the explain plan, i.e.

dbname=# EXPLAIN [SQL that's taking a long time]

We would also need to see the table structures:

dbname=# \d [name of each table in the above explain plan]

> Has anybody an idea what might have happened here?
> I need a quick solution, since I'm talking about a live server that should
> be running 24 hours a day.

It may be that the planner has started to pick a bad plan. This can happen if
the database is changing regularly and the stats are not up to date. I believe
it can happen even when the stats are up to date, but it is much less likely
to.

It might also be an idea to vacuum and analyze the database:

dbname=# VACUUM ANALYZE;

This will load the server up for a while, though.

--
http://www.hjackson.org
http://www.uklug.co.uk
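
P.S. A minimal sketch of the whole sequence, assuming a stock install where
postgresql.conf sits in the data directory, and using a hypothetical slow
query against a table called orders (substitute your own paths, tables and
statements from the log):

# postgresql.conf: log any statement that runs longer than 2 seconds
log_min_duration_statement = 2000

# pick up the new setting without a full restart
$ pg_ctl reload -D /path/to/data

-- for each slow statement that shows up in the log
dbname=# EXPLAIN SELECT * FROM orders WHERE customer_id = 42;
dbname=# \d orders

-- refresh the planner statistics and clean out dead rows
dbname=# VACUUM ANALYZE;

EXPLAIN ANALYZE instead of plain EXPLAIN would report actual run times as
well as the planner's estimates, but it executes the query, so be careful
with it on a loaded production box.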