On Mon, 2007-02-19 at 11:50 +0100, Andreas Tille wrote:
> Hi,
>
> I'm running a web application using Zope that obtains all data
> from a PostgreSQL 7.4 database (Debian Sarge system with package
> 7.4.7-6sarge4 on an "older" Sparc machine, equipped with 2GB

Upgrade to 8.2.3 if possible, or at least to 7.4.16.

This is a basic question, but do you VACUUM ANALYZE regularly? 7.4 is
before autovacuum was integrated into the core. If you don't do this,
you could have a lot of wasted space in your tables causing unneeded
I/O, and the planner might be making bad plans.

> memory and two processors E250 server). Once I did some performance
> tuning and found out that
>
> max_connections = 256
> shared_buffers = 131072
> sort_mem = 65536
>

You're allocating 50% of the physical memory to shared buffers. That's
not necessarily too much, but it's on the high side of the normal
range. Does the total size of all of your tables and indexes add up to
enough to exhaust your physical memory? Check to see whether you have
any exceptionally large tables or indexes. You can do that easily with
pg_relation_size('a_table_or_index') and pg_total_relation_size('a_table').

> Since about two weeks the application became *drastically* slower
> and I urgently have to bring back the old performance. As I said,
> I'm talking about functions accessing tables that did not increase
> over several years and should behave more or less the same.
>
> I wonder whether adding tables and functions could have an influence
> on other untouched parts and how to find out what makes the things
> slow that worked reliably and satisfyingly for years. My first try

You need to provide queries, and also define "slower".

Set log_min_duration_statement to some positive value (I often start
with 1000) to try to catch the slow statements in the logs. Once you
have found the slow statements, do an EXPLAIN and an EXPLAIN ANALYZE
on those statements. That will tell you exactly what you need to know.

Regards,
	Jeff Davis
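
P.S. For reference, the diagnostics above could look roughly like the
following in a psql session. This is only a sketch: 'mytable' and
'myindex' are placeholder names, and the size functions assume a server
new enough to provide them.

```sql
-- Reclaim dead space and refresh planner statistics
-- (7.4 has no integrated autovacuum, so this must be scheduled):
VACUUM ANALYZE;

-- Look for exceptionally large relations:
SELECT pg_relation_size('myindex');        -- one table or index, in bytes
SELECT pg_total_relation_size('mytable');  -- table plus its indexes

-- Log every statement slower than 1000 ms
-- (set in postgresql.conf, or per-session as a superuser):
SET log_min_duration_statement = 1000;

-- Then inspect a slow statement caught in the logs:
EXPLAIN ANALYZE SELECT ...;  -- substitute the actual slow query
```

EXPLAIN shows the plan the optimizer chose; EXPLAIN ANALYZE actually
runs the query, so the estimated row counts can be compared against the
real ones to spot stale statistics or bad plans.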