A few weeks ago I tested a customer application on 16 cores with Oracle:
 - 20,000 sessions in total
 - 70,000 queries/sec without any problem on a mid-range Sun box + Solaris 10.

Rgds,
-Dimitri

On 6/3/09, Kevin Grittner <Kevin.Grittner@xxxxxxxxxxxx> wrote:
> James Mansion <james@xxxxxxxxxxxxxxxxxxxxxx> wrote:
>
>> I'm sure most of us evaluating Postgres from a background in Sybase
>> or SQLServer would regard 5000 connections as no big deal.
>
> Sure, but the architecture of those products is based around all the
> work being done by "engines" which try to establish affinity to
> different CPUs and loop through the various tasks to be done.  You
> don't get a context-switch storm because you normally keep the number
> of engines at or below the number of CPUs.  The downside is that they
> spend a lot of time spinning on queue access to see whether anything
> has become available to do, which means they don't play nicely with
> other processes on the same box.
>
> If you do connection pooling and queue requests, you get the best of
> both worlds.  If that could be built into PostgreSQL, it would
> probably reduce the number of posts requesting support for bad
> configurations, and help with benchmarks which don't use proper
> connection pooling for the product; but it would not actually add any
> capability which isn't there if you do your homework.
>
> -Kevin
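
A rough sketch of the pooling-plus-queueing pattern Kevin describes, assuming
Python with psycopg2 (the DSN, pool size, and helper name are made up for
illustration and are not from this thread; psycopg2's ThreadedConnectionPool
raises PoolError when exhausted rather than blocking, so a semaphore does the
queueing of excess callers):

    # Sketch only: many application threads share a small, fixed set of
    # real backend connections; callers beyond the cap wait on the
    # semaphore instead of opening more connections to the server.
    import threading
    from psycopg2.pool import ThreadedConnectionPool

    MAX_BACKENDS = 16                      # ~ number of cores, not clients
    pool = ThreadedConnectionPool(2, MAX_BACKENDS,
                                  dsn="dbname=app user=app host=127.0.0.1")
    slots = threading.BoundedSemaphore(MAX_BACKENDS)

    def run_query(sql, params=None):
        with slots:                        # excess callers queue up here
            conn = pool.getconn()
            try:
                with conn.cursor() as cur:
                    cur.execute(sql, params)
                    rows = cur.fetchall()
                conn.commit()
                return rows
            finally:
                pool.putconn(conn)         # return the connection, not close it

Capping the pool near the core count keeps the number of active backends
small regardless of how many client threads exist, which is the same effect
the Sybase/SQL Server "engine" model gets by construction.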