On 06/10/2011 08:56 PM, Anibal David Acosta wrote:
> The version is Postgres 9.0.
> Yes, I set up postgresql.conf according to the instructions in
> http://wiki.postgresql.org/wiki/Tuning_Your_PostgreSQL_Server
> Cool, I will check this:
> http://wiki.postgresql.org/wiki/Logging_Difficult_Queries
> Looks like a great starting point for finding the bottleneck.
> But then, is it possible under ideal conditions for two connections to double the transactions per second?
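For the logging page: as a starting point, here's a minimal sketch of the relevant postgresql.conf knobs. The thresholds below are placeholders, not recommendations; tune them for your workload.

    # postgresql.conf
    log_min_duration_statement = 250   # log any statement slower than 250 ms
    log_lock_waits = on                # log waits longer than deadlock_timeout
    log_temp_files = 0                 # log every temp file spill (size in kB)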
For two connections, if you have most of the data cached in RAM or you
have lots of fast disks, then sure. For that matter, if they're
synchronized scans of the same table then the second transaction might
perform even faster than the first one!
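If you want to check how much of the workload is actually being served from cache, here's a rough sketch against pg_stat_database. Note this only sees shared_buffers hits; reads satisfied by the OS page cache still count as blks_read.

    SELECT datname,
           blks_hit,
           blks_read,
           round(blks_hit * 100.0 / nullif(blks_hit + blks_read, 0), 1)
               AS pct_hit  -- percent of block requests served from shared_buffers
    FROM pg_stat_database
    WHERE datname = current_database();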
Overheads for transaction synchronization and the like grow with the number of connections, and the connections will usually end up contending for system resources like RAM (for disk cache, work_mem, etc.), disk I/O, and CPU time. So you won't generally get linear scaling with the number of connections.
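You can see where your own hardware stops scaling with a quick pgbench run. The database name, scale factor, and durations below are just examples:

    $ pgbench -i -s 50 bench          # initialize a test database at scale 50
    $ pgbench -c 1 -j 1 -T 60 bench   # single-connection baseline
    $ pgbench -c 2 -j 2 -T 60 bench   # two connections: ideally close to 2x the TPS
    $ pgbench -c 8 -j 4 -T 60 bench   # keep doubling and watch where TPS flattens

Compare the reported tps numbers across the runs; the point where they stop climbing is roughly where contention takes over.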
Greg Smith has done some excellent and detailed work on this. I highly
recommend reading his writing, and you should consider buying his recent
book "PostgreSQL 9.0 High Performance".
See also:
http://wiki.postgresql.org/wiki/Performance_Optimization
There have been lots of PostgreSQL scaling benchmarks done over time,
too. You'll find a lot of information if you look around the wiki and
Google.
--
Craig Ringer