> I have a function in pgsql language. This function does some selects on
> some tables to verify some conditions and then does one insert into a
> table with NO index. No updates are performed in the function.
>
> When 1 client is connected, postgres does 180 executions per second.
> With 2 clients connected, postgres does 110 executions per second.
> With 3 clients connected, postgres does 90 executions per second.
>
> Finally, with 6 connected clients, postgres does 60 executions per second
> (360 executions per second in total).
>
> While testing, I monitored disk, memory and CPU and found no overload.

There's always a bottleneck - otherwise the system would run faster (and
eventually hit another bottleneck). It might be CPU, I/O, memory, locking,
or a few less common things.

> I know that with this information you can figure out some things, but
> under normal conditions, is it normal for performance to degrade per
> connection as connections are added?
> Or should I expect 180 on the first connection and something similar on
> the second? Maybe 170?
>
> The server is a dual Xeon quad core with 16 GB of RAM and very fast
> storage.
> The OS is Windows 2008 R2 x64.

It might be, but we need more details about how the system works. On Linux
I'd ask for output from 'iostat -x 1' and 'vmstat 1', but you're on
Windows, so there are probably other tools.

What version of PostgreSQL is this? What are the basic config values
(shared_buffers, work_mem, effective_cache_size, ...)? Have you done any
tuning? There's a wiki page about this:

  http://wiki.postgresql.org/wiki/Tuning_Your_PostgreSQL_Server

Have you tried logging slow queries? Maybe there's one query that makes the
whole workload slow. See this:

  http://wiki.postgresql.org/wiki/Logging_Difficult_Queries

Tomas

--
Sent via pgsql-performance mailing list (pgsql-performance@xxxxxxxxxxxxxx)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-performance
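A side note on the numbers reported above: per-client throughput drops as clients are added, but the aggregate still rises, which is the usual signature of a contended shared resource (locks, WAL/commit I/O) rather than a saturated CPU. A minimal sketch, using only the figures quoted in the post, makes the scaling pattern explicit:

```python
# Per-client rates reported in the post (executions/sec per client).
per_client_rate = {1: 180, 2: 110, 3: 90, 6: 60}

for clients, rate in sorted(per_client_rate.items()):
    total = clients * rate                       # aggregate throughput
    linear = clients * per_client_rate[1]        # ideal linear scaling
    print(f"{clients} client(s): {rate}/s each -> {total}/s total "
          f"({total / linear:.0%} of linear)")
```

So the aggregate goes 180, 220, 270, 360 executions/sec - still growing, but at only a third of linear scaling by 6 clients, which is why identifying the specific bottleneck (via iostat/vmstat equivalents or slow-query logging) matters.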