Ok, I think I found a possible bottleneck. The function that does some selects runs really fast, more than 1,000 executions per second. But the whole thing slows down when an update of one record in a very small table happens. I tested with an insert instead of an update and the same behavior occurred. So the only way to scale up is to turn off synchronous_commit, but that can be dangerous.

Anyway, thanks a lot for your time.

Anibal

-----Original Message-----
From: pgsql-performance-owner@xxxxxxxxxxxxxx [mailto:pgsql-performance-owner@xxxxxxxxxxxxxx] On behalf of Greg Smith
Sent: Friday, June 10, 2011 12:50 p.m.
To: pgsql-performance@xxxxxxxxxxxxxx
Subject: Re: how much postgres can scale up?

On 06/10/2011 07:29 AM, Anibal David Acosta wrote:
> With 1 client connected, postgres does 180 executions per second. With 2
> clients connected, postgres does 110 executions per second. With 3 clients
> connected, postgres does 90 executions per second.
>
> Finally, with 6 clients connected, postgres does 60 executions per second
> (360 executions per second in total).
>
> While testing, I monitored disk, memory, and CPU and found no overload.
>
> I know that with this information you can figure out some things, but
> under normal conditions, is this per-connection degradation of performance
> expected as connections are added? Or should I expect 180 on the first
> connection and something similar on the second? Maybe 170?

Let's reformat this the way most people present it:

clients  tps
1        180
2        220
3        270
6        360

It's common for a single connection doing INSERT statements to hit a bottleneck based on how fast the drives used can spin. That's anywhere from 100 to 200 inserts/second, approximately, unless you have a battery-backed write cache. See http://wiki.postgresql.org/wiki/Reliable_Writes for more information.

However, multiple clients can commit at once when a backlog occurs. So what you'll normally see in this situation is that the rate goes up faster than this as clients are added. Here's a real sample, from a server that's only physically capable of doing 120 commits/second on its 7200 RPM drive:

clients  tps
1        107
2        109
3        163
4        216
5        271
6        325
8        432
10       530
15       695

This is how it's supposed to scale even on basic hardware. You didn't explore this far enough to really know how well your scaling is working here, though. Since commit rates are limited by disk rotation speed in this situation, the results for 1 to 5 clients are not really representative of how a large number of clients will end up working. As already mentioned, turning off synchronous_commit should give you an interesting alternate set of numbers.

It's also possible there may be something wrong with whatever client logic you are using here. Something about the way you've written it may be acquiring a lock that blocks other clients from executing efficiently, for example. I'd suggest turning on log_lock_waits and setting deadlock_timeout to a small number, which should show you some extra logging in situations where people are waiting for locks. Running some queries to look at the lock data, such as the examples at http://wiki.postgresql.org/wiki/Lock_Monitoring, might be helpful too.
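To make those lock waits visible, both settings go in postgresql.conf. The values below are illustrative; log_lock_waits defaults to off, and deadlock_timeout defaults to 1s:

    log_lock_waits = on       # log a message whenever a session waits on a lock
                              # for longer than deadlock_timeout
    deadlock_timeout = 100ms  # run the deadlock/lock-wait check after 100ms
                              # instead of the default 1s

Both can be changed without a restart; a reload (pg_ctl reload, or SELECT pg_reload_conf()) is enough.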
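As a simple starting point along the lines of that wiki page, the query below lists every lock request currently waiting, together with the query the waiting backend is running. It assumes a release where pg_stat_activity exposes pid and query columns (older releases named them procpid and current_query):

    -- List ungranted lock requests and the query behind each waiting backend
    SELECT w.pid,
           w.locktype,
           w.relation::regclass AS relation,
           w.mode,
           a.query
    FROM pg_locks w
    JOIN pg_stat_activity a ON a.pid = w.pid
    WHERE NOT w.granted;

If the same relation keeps showing up here under the update workload, that's a strong hint the client logic is serializing on it.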
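On the synchronous_commit point: it doesn't have to be turned off for the whole server. If only the one high-frequency update can tolerate losing its last few commits after a crash (this setting can lose recently committed transactions, but it cannot corrupt the database), it can be disabled per transaction. A minimal sketch, using a hypothetical counters table:

    BEGIN;
    -- Applies to this transaction only: COMMIT returns without waiting
    -- for the WAL flush, so a crash may lose this transaction.
    SET LOCAL synchronous_commit TO OFF;
    UPDATE counters SET value = value + 1 WHERE id = 1;  -- hypothetical table
    COMMIT;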
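Finally, to map out a clients-vs-tps curve like the samples above on your own hardware, pgbench (shipped in contrib) is the usual tool. A rough sketch, assuming a scratch database named bench already exists; the scale factor and run length are illustrative:

    pgbench -i -s 10 bench            # initialize the standard test tables once
    for c in 1 2 3 4 5 6 8 10 15; do
        pgbench -c $c -T 60 bench     # 60-second write-heavy run at each client count
    done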
--
Greg Smith   2ndQuadrant US   greg@xxxxxxxxxxxxxxx   Baltimore, MD
PostgreSQL Training, Services, and 24x7 Support   www.2ndQuadrant.us
"PostgreSQL 9.0 High Performance": http://www.2ndQuadrant.com/books