Thanks for the information, Claus. Why would reducing the effective
cache size help with processor usage? There seem to be plenty of
resources on the box, although I can see that 10MB of sort space could
mount up if we had 500 connections (500 x 10MB is about 5GB in the
worst case), but at the moment we have nothing like that number.
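Just to check that I have understood your suggestion, is this roughly
what you mean for postgresql.conf? The comments are my own reading of
the 7.4 docs, so the unit conversions are my assumption rather than
anything I have tested:

shared_buffers = 32768         # 8kB buffers, so 32768 = 256MB; your value
sort_mem = 1024                # in kB, so 1MB per sort; 500 connections would
                               # then be ~500MB worst case instead of ~5GB
effective_cache_size = 32768   # in 8kB pages, so 32768 = 256MB

If I have those units right, I am still not clear why shrinking
effective_cache_size would reduce CPU load, since as far as I can tell
it is only a hint to the planner and does not allocate anything.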
Thanks,
Matthew

Claus Guttesen wrote:
>> I have a 4 * dual-core 64-bit AMD Opteron server with 16G of RAM,
>> running postgres 7.4.3. This has been recompiled on the server to
>> allow 64 stored procedure parameters (I assume this makes postgres
>> 64-bit, but I am not sure). When the server comes under load from
>> database connections executing reads, let's say 20-40 concurrent
>> reads, the CPUs seem to top out at about 30-35% usage with no iowait
>> reported. If I run a simple select at this time it takes 5 seconds;
>> the same query runs in 300 ms when the server is not under load, so
>> it seems that the database is not performing well even though there
>> is plenty of spare CPU. There does not appear to be a large amount
>> of disk IO, and my database is about 5.5G, so it should fit
>> comfortably in RAM.
>>
>> Changes to postgresql.conf:
>>
>> max_connections = 500
>> shared_buffers = 96000
>> sort_mem = 10240
>> effective_cache_size = 1000000
>>
>> Does anyone have any ideas what my bottleneck might be and what I
>> can do about it?
>
> You might want to lower shared_buffers. Mine is set at 32768.
>
> Is your db performing complex sorts? Remember that sort_mem is
> allocated per connection. Maybe 1024.
>
> effective_cache_size should also be lowered, to something like 32768.
> As far as I understand, shared_buffers and effective_cache_size have
> to be altered "in reverse", i.e. when lowering one the other can be
> raised.
>
> HTH.

--
Matthew Lunnon
Technical Consultant
RWA Ltd.

mlunnon@xxxxxxxxxxxxx
Tel: +44 (0)29 2081 5056
www.rwa-net.co.uk