Hey Matt,

You've got a couple of options. One is to set up a connection pooler
between Apache (PHP) and Postgres; SQL Relay (sqlrelay.sf.net) is a
common candidate, and I have used it in the past with great results.
Another is to cache common, mostly static data on the webserver,
either with a database abstraction library that can cache query
results, such as ADOdb (adodb.sf.net), or with a RAM-based cache such
as memcached (www.danga.com/memcached). Some people tell me that
persistent connections are actually bad and to always use
pg_connect(). I cannot vouch for that approach, but you might want to
try it and see whether it helps in your particular situation. (Rough
sketches of the caching and pg_connect() approaches follow below the
quoted message.)

In a couple of weeks I'm rolling out a site that gets >5M page views
daily, so that should be a good opportunity to gather some detailed
real-world performance numbers.

- Mitch

On Wed, 29 Dec 2004 10:05:18 -0500, Matthew Terenzio
<webmaster@xxxxxxxxxxxxxxx> wrote:
> After years of running Apache-PHP-Postgres with no issues, I'm
> suddenly receiving the "cannot connect to postgres - too many clients
> already" error on my script pages.
>
> 1. Does everyone ALWAYS keep Apache's max connections lower than
> Postgres's max clients? I tried this but the problem still returned.
>
> 2. I'm using pg_pconnect().
>
> 3. Traffic is slightly higher lately, but not that much higher.
>
> 4. I'm thinking about changing Apache's MaxRequestsPerChild from 0 to
> maybe 10 or so, to periodically kill off Apache children, but I can't
> see why Apache's MaxClients would overload Postgres if they are both
> set to the same number. It is hard for me to believe there are
> actually that many simultaneous users of this system, so for some
> reason the connections are remaining open and unused.
>
> Any wisdom out there?
>
> Matt
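
For the ADOdb route, here is a minimal sketch of what query caching
can look like. It assumes ADOdb is unpacked under adodb/; the driver
name, credentials, cache directory, query, and 600-second lifetime are
placeholders, not anything from Matt's setup. CacheExecute() writes
the result set to a file and serves repeat requests from that file
until it expires, so hot, mostly-static queries stop hitting Postgres
on every page view.

    <?php
    // Minimal ADOdb result-caching sketch (adodb.sf.net).
    // Paths, credentials and the query are placeholders.
    include('adodb/adodb.inc.php');

    $ADODB_CACHE_DIR  = '/tmp/adodb_cache';     // where cached result sets live
    $ADODB_FETCH_MODE = ADODB_FETCH_ASSOC;      // fetch rows as assoc arrays

    $db = ADONewConnection('postgres7');
    $db->Connect('localhost', 'webuser', 'secret', 'mydb');

    // Serve this result set from the file cache for up to 600 seconds;
    // only a cache miss actually runs the query against Postgres.
    $sql = 'SELECT id, title FROM headlines ORDER BY posted DESC LIMIT 20';
    $rs  = $db->CacheExecute(600, $sql);

    while ($rs && !$rs->EOF) {
        echo $rs->fields['title'], "\n";
        $rs->MoveNext();
    }

    $db->Close();
    ?>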
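
The memcached variant is the same read-through pattern, only in RAM
instead of on disk. This sketch uses the pecl Memcache extension and
assumes a memcached daemon on localhost:11211; the key name,
300-second expiry, and query are again invented for illustration.

    <?php
    // Read-through cache with memcached (www.danga.com/memcached)
    // via the pecl Memcache extension. Key, TTL and query are illustrative.
    $mc = new Memcache;
    $mc->connect('localhost', 11211);

    $key  = 'headlines:front_page';
    $rows = $mc->get($key);

    if ($rows === false) {
        // Cache miss: ask Postgres once, then prime the cache.
        $conn = pg_connect('host=localhost dbname=mydb user=webuser password=secret');
        $res  = pg_query($conn, 'SELECT id, title FROM headlines ORDER BY posted DESC LIMIT 20');
        $rows = pg_fetch_all($res);
        pg_close($conn);

        if ($rows === false) {
            $rows = array();               // no rows: cache an empty set
        }
        $mc->set($key, $rows, 0, 300);     // flag 0 = no compression, 300 s expiry
    }

    foreach ($rows as $row) {
        echo $row['title'], "\n";
    }
    ?>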
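
Switching from persistent to plain connections, as the "always use
pg_connect()" camp suggests, is a one-function change plus an explicit
close; the connection string below is a placeholder. The point is that
the Postgres backend goes away when the page finishes instead of
sitting idle inside an Apache child.

    <?php
    // Non-persistent connection: opened per request, closed explicitly,
    // so an idle Apache child no longer pins a Postgres backend.
    // The connection string is a placeholder.
    $conn = pg_connect('host=localhost dbname=mydb user=webuser password=secret');
    if (!$conn) {
        die('could not connect to Postgres');
    }

    $res = pg_query($conn, 'SELECT count(*) FROM headlines');
    $row = pg_fetch_row($res);
    echo 'headlines: ', $row[0], "\n";

    pg_close($conn);   // a no-op on connections opened with pg_pconnect()
    ?>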
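
On questions 1 and 4 in the quoted message: with pg_pconnect(), every
Apache child keeps its connection open between requests, at least one
per distinct connection string, so the number of Postgres backends
creeps toward Apache's MaxClients even when most children are idle.
That is the usual reason the connections look "open and unused", and
why setting MaxClients equal to max_connections can still fail as soon
as anything else (psql, a cron job, a second connection string) wants
a slot. A rough sketch of the relevant directives, with the numbers
purely illustrative:

    # httpd.conf (Apache 1.3 / prefork MPM)
    MaxClients           150
    MaxRequestsPerChild  1000     # recycle children now and then

    # postgresql.conf
    max_connections = 170         # leave headroom above MaxClients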