Re: how much postgres can scale up?

On 06/10/2011 07:29 PM, Anibal David Acosta wrote:

> I know that with this information you can only figure out some things,
> but under normal conditions, is it normal for performance per connection
> to degrade as the number of connections increases?

With most loads you will find that per-worker throughput decreases as you add workers. Overall throughput will usually increase with the number of workers until you reach a certain "sweet spot", then decrease as you add more workers beyond it.

Where that sweet spot is depends on how much your queries rely on CPU vs disk vs memory, your Pg version, how many disks you have, how fast they are and what configuration they are in, what/how many CPUs you have, how much RAM you have, how fast your RAM is, etc. There's no simple formula because it's so workload-dependent.

The usual *very* rough rule of thumb given here is that your sweet spot should be *vaguely* the number of CPU cores + the number of hard drives. That's *incredibly* rough; if you care, you should benchmark it using your real workload.
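If you want to find the sweet spot empirically, pgbench (which ships with PostgreSQL) makes it easy to sweep over client counts. A rough sketch follows; the database name and scale factor are placeholders, and pgbench's built-in TPC-B-like workload is only a stand-in for your real one (you can feed it your own queries with -f):

    # initialize pgbench's test tables at scale factor 50 (placeholder size)
    pgbench -i -s 50 mydb

    # run 60-second tests at increasing client counts and compare the
    # reported tps; the sweet spot is roughly where tps stops climbing
    for c in 4 8 16 32 64 128; do
        pgbench -c $c -j 4 -T 60 mydb
    done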

If you need lots and lots of clients then it may be beneficial to use a connection pool like pgbouncer or PgPool-II so you don't have lots more connections trying to do work at once than your hardware can cope with. Having fewer connections doing work in the database at the same time can improve overall performance.
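For what it's worth, a minimal pgbouncer setup only needs a few lines of pgbouncer.ini. The database name, paths and pool sizes below are made up and would need tuning to your own hardware's sweet spot:

    [databases]
    ; hypothetical database; points pgbouncer at the real server
    mydb = host=127.0.0.1 port=5432 dbname=mydb

    [pgbouncer]
    listen_addr = 127.0.0.1
    listen_port = 6432
    auth_type = md5
    auth_file = /etc/pgbouncer/userlist.txt
    ; transaction pooling gives the best connection reuse, but breaks
    ; session-level features like prepared statements and temp tables
    pool_mode = transaction
    ; accept many client connections...
    max_client_conn = 1000
    ; ...but only let this many work in the server at once
    default_pool_size = 20

Your application then connects to port 6432 instead of 5432, and pgbouncer queues excess clients rather than letting them all pile into the server at once.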

--
Craig Ringer
