On 10/18/11 9:51 AM, Bill Moran wrote:
Basically, we wanted to limit the number of processes so that client code doesn't have to retry when a connection or subprocess is unavailable; does Postgres take care of the queuing?
pgpool and pgbouncer handle some of that, but I don't know if they do
exactly everything that you want. Probably a good place to start, though.
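If pgbouncer fits, most of its behavior comes down to a handful of settings in pgbouncer.ini. A minimal sketch, where the database name, auth file path, and pool sizes are illustrative placeholders, not values from this thread:

    [databases]
    ; clients connect to pgbouncer on port 6432; it routes to the real server
    mydb = host=127.0.0.1 port=5432 dbname=mydb

    [pgbouncer]
    listen_addr = 127.0.0.1
    listen_port = 6432
    auth_type = md5
    auth_file = /etc/pgbouncer/userlist.txt
    ; transaction pooling releases the backend at commit/rollback, which is
    ; what lets many sporadic clients share a few real connections
    pool_mode = transaction
    ; lots of client sockets may connect in...
    max_client_conn = 200
    ; ...but only this many actual postgres backends per database
    default_pool_size = 8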
Pools work great when you have a lot of clients that only sporadically make queries, like web users. Each client (such as a webserver process) grabs a connection from the pool, runs its transactions, then releases the connection back to the pool. A pool won't help much if all 100 of your clients want to run a query at the same time.
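To make that grab/run/release cycle concrete, here is a minimal client-side sketch using psycopg2's ThreadedConnectionPool; the DSN, credentials, and query are made-up placeholders. (With pgbouncer the pooling happens on the server side instead, so plain connect() calls get much the same effect.)

    import psycopg2.pool

    # at most 8 real connections, shared by all application threads
    pool = psycopg2.pool.ThreadedConnectionPool(
        minconn=1, maxconn=8,
        dsn="host=127.0.0.1 dbname=mydb user=app password=secret")

    def run_report():
        conn = pool.getconn()          # grab a connection from the pool
        try:
            with conn.cursor() as cur:
                cur.execute("SELECT count(*) FROM orders")  # placeholder query
                return cur.fetchone()[0]
        finally:
            conn.commit()
            pool.putconn(conn)         # release it back for the next caller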
Your 4-CPU, 8GB machine will likely be at its best running no more than about 8 queries at once (give or take a few, depending on how many disk drives are in your RAID arrays and how much I/O concurrency the server can support). Oh, you mentioned MS Windows in there; in that case 8 is optimistic, and the optimal value may be more like 4.
If you have 100 clients that all want to run queries every 5 minutes, you should consider using some sort of message queueing system, where your clients send a message to an application service, and the app server runs as many queue workers as you find optimal. Each worker reads a message from the queue, runs the database requests needed to satisfy it, returns the results to the client, then grabs the next queue entry, and so on.
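A bare-bones sketch of that worker pattern, with Python's standard-library queue and threads standing in for a real message broker; every name in it (connection string, SQL, worker count) is an illustrative assumption, not something from this thread:

    import queue
    import threading
    import psycopg2

    NUM_WORKERS = 8                      # tune to what the server can handle
    jobs = queue.Queue()                 # stands in for the real message broker

    def worker():
        # each worker holds one database connection for its whole lifetime
        conn = psycopg2.connect("host=127.0.0.1 dbname=mydb user=app")
        while True:
            client_id, sql, reply = jobs.get()   # block until a request arrives
            try:
                with conn.cursor() as cur:
                    cur.execute(sql)
                    reply.put(cur.fetchall())    # hand results back to the client
                conn.commit()
            finally:
                jobs.task_done()                 # then loop for the next entry

    for _ in range(NUM_WORKERS):
        threading.Thread(target=worker, daemon=True).start()

    # a "client" submits a request and waits for its answer
    answer = queue.Queue()
    jobs.put((1, "SELECT now()", answer))
    print(answer.get())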
--
john r pierce N 37, W 122
santa cruz ca mid-left coast