In response to "Deshpande, Yogesh Sadashiv (STSD-Openview)" <yogesh-sadashiv.deshpande@xxxxxx>:

> Hello,
>
> We have a setup in which around 100 processes run in parallel every 5
> minutes, and each one opens a connection to the database. We are
> observing that for each connection, PostgreSQL also creates a
> sub-process. We have set max_connections to 100, so the number of
> sub-processes in the system is close to 200 every 5 minutes, and
> because of this we are seeing very high CPU usage.

This does not follow logically, in my experience. We have many servers
with over 300 simultaneous connections, and the connections themselves
do not automatically create high CPU usage. Unless, of course, there is
an issue with the particular OS you're using, which you didn't mention.

> We need the following information:
>
> 1. Is there any configuration we can do that would pool the connection
> requests rather than failing with "connection limit exceeded"?

Use pgpool or pgbouncer.

> 2. Is there any configuration we can do that would limit the
> sub-processes to some value, say 50, and queue any further connection
> requests?

Set max_connections and handle the connection retry in your application
(a small sketch of what I mean is at the bottom of this message).

> Basically, we want to limit the number of processes so that the client
> code doesn't have to retry when no connection or sub-process is
> available; PostgreSQL would take care of the queuing.

pgpool and pgbouncer handle some of that, but I don't know if they do
exactly everything that you want. Probably a good place to start,
though.

-- 
Bill Moran
http://www.potentialtech.com
http://people.collaborativefusion.com/~wmoran/
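
For what it's worth, the "retry in your application" part can be as
simple as a small wrapper around the connect call. A minimal sketch,
assuming Python with psycopg2 (the DSN, attempt count, and delay below
are placeholders you would adjust for your setup):

    import time
    import psycopg2

    def connect_with_retry(dsn, attempts=10, delay=3.0):
        """Try to connect; if the server is out of connection slots,
        wait a bit and retry instead of failing immediately."""
        for attempt in range(1, attempts + 1):
            try:
                return psycopg2.connect(dsn)
            except psycopg2.OperationalError:
                # "FATAL: sorry, too many clients already" (i.e.
                # max_connections exhausted) typically lands here.
                if attempt == attempts:
                    raise
                time.sleep(delay)

    # Example (placeholder DSN):
    conn = connect_with_retry("dbname=mydb user=myuser host=localhost")

pgbouncer should get you a similar effect without touching the
application: point the clients at pgbouncer instead of PostgreSQL, keep
its server-side pool well below max_connections, and it will hold the
excess clients until a server connection frees up.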