Phoenix Kiula wrote:
> Well based on some past posts, I looked into my pg_log stuff and found
> a number of these lines:
>
> [----------------
> LOG: could not fork new process for connection: Resource temporarily unavailable
> LOG: could not fork new process for connection: Resource temporarily unavailable
> LOG: could not fork new process for connection: Resource temporarily unavailable
> LOG: could not fork new process for connection: Resource temporarily unavailable
> LOG: could not fork new process for connection: Resource temporarily unavailable
> LOG: could not fork new process for connection: Resource temporarily unavailable
> ----------------]
>
> Which suggests that our guess of running out of connections is the right one.

No, you're not running out of "connections". You are, however, running
over some kernel limit.

> So, we have three options (to begin with) --
>
> 1. Increase the number of max_connections. This seems to be a voodoo
> art and a complex calculation of database size (which in our case is
> difficult to predict; it grows very fast), hardware, and such. I
> cannot risk other apps running on this same machine.

No. You are not yet reaching max_connections; that would give you an
error message that actually says so. This message indicates that the
account PostgreSQL runs under has a ulimit restricting some resource -
in this case most likely the number of processes - and that limit is
not "compatible" with your max_connections setting. You need to find
this ulimit and at least raise it, or remove it entirely.

> 2. Use connection pooling. I've found pgpool2 and pgbouncer from the
> Skype group. Does anyone have experience using either? The latter
> looks good, although we're usually skeptical about connection pooling
> in general (or is that just the mysqli_pconnect() hangover?)

Connection pooling "in general" is something that pretty much *every*
larger app will use.
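The ulimit diagnosis above can be checked directly. A minimal sketch (run as the account the postmaster uses; RLIMIT_NPROC is Linux/BSD-specific, and the numbers you see are of course system-dependent):

```python
import resource

# Soft/hard limit on the number of processes this user account may have.
# fork() fails with EAGAIN ("Resource temporarily unavailable") once the
# soft limit is reached, so it must comfortably exceed max_connections
# plus whatever else runs under the same account.
soft, hard = resource.getrlimit(resource.RLIMIT_NPROC)

def show(limit):
    # RLIM_INFINITY means "unlimited" (shown as "unlimited" by ulimit -u)
    return "unlimited" if limit == resource.RLIM_INFINITY else str(limit)

print("max user processes: soft=%s hard=%s" % (show(soft), show(hard)))
```

The same number is what `ulimit -u` reports in a shell; raising the limit permanently is done in the system's limits configuration rather than from a script.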
It may be implemented in the app (something that has often been
troublesome in PHP solutions, but is certainly the norm for Java or
.Net apps) or in middleware like pgpool or pgbouncer. There should be
no need to be sceptical about it in general ;-) Can't speak for either
of those apps specifically, as I haven't used them in production.

> 3. Use caching of queries. Memcache comes recommended, but there's a
> discussion as recently as Jan 2007 on this list about race conditions
> and such (most of which I don't quite understand) which cautions
> against its use. We do expect plenty of transactions and if something
> that has been updated is not very correctly and promptly invalidated
> in the cache, it has huge business repercussions for us.

There are ways to do this, but if you can't just use timeouts to expire
entries from the cache, things can become pretty complicated pretty
fast. But perhaps you can isolate some kinds of queries that can be
cached for <n> minutes, and leave the rest uncached?

//Magnus

---------------------------(end of broadcast)---------------------------
TIP 9: In versions below 8.0, the planner will ignore your desire to
choose an index scan if your joining column's datatypes do not match
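[Postscript] The "cache for <n> minutes" approach mentioned above can be sketched as a minimal timeout-expiring cache. This is only an illustration of the idea (the cache key and placeholder result below are hypothetical; a real deployment would use memcached itself, which supports per-key expiry natively):

```python
import time

class TTLCache:
    """Minimal timeout-expiring cache: entries silently disappear
    after ttl_seconds, so staleness is bounded without explicit
    invalidation logic."""

    def __init__(self, ttl_seconds):
        self.ttl = ttl_seconds
        self._store = {}  # key -> (expiry_timestamp, value)

    def get(self, key):
        entry = self._store.get(key)
        if entry is None:
            return None
        expires_at, value = entry
        if time.time() >= expires_at:
            del self._store[key]  # stale: expire on access
            return None
        return value

    def put(self, key, value):
        self._store[key] = (time.time() + self.ttl, value)

# Usage sketch: only queries that tolerate <n>-minute staleness go
# through the cache; anything update-sensitive hits the database directly.
cache = TTLCache(ttl_seconds=300)  # 5 minutes
rows = cache.get("popular_pages")  # hypothetical cache key
if rows is None:
    # Hypothetical placeholder: this is where the real SELECT would run.
    rows = ["/page/1", "/page/2"]
    cache.put("popular_pages", rows)
```

The design trade-off is exactly the one described in the message: a pure-timeout cache never serves data older than the TTL, but it can serve data that stale, so it only fits queries where that is acceptable.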