Phoenix Kiula wrote:
> On 19/08/07, Phoenix Kiula <phoenix.kiula@xxxxxxxxx> wrote:
> [...]
> Well, based on some past posts, I looked into my pg_log stuff and found
> a number of these lines:
>
> [----------------
> LOG:  could not fork new process for connection: Resource temporarily unavailable
> LOG:  could not fork new process for connection: Resource temporarily unavailable
> LOG:  could not fork new process for connection: Resource temporarily unavailable
> LOG:  could not fork new process for connection: Resource temporarily unavailable
> LOG:  could not fork new process for connection: Resource temporarily unavailable
> LOG:  could not fork new process for connection: Resource temporarily unavailable
> ----------------]
>
> Which suggests that our guess of running out of connections is the right one.
>
> So, we have three options (to begin with) --
>
> 1. Increase the number of max_connections. This seems to be a voodoo
> art and a complex calculation of database size (which in our case is
> difficult to predict; it grows very fast), hardware, and such. I
> cannot risk other apps running on this same machine.

This error is a sign that the OS (!) is running out of resources (or at
least won't allow Postgres to fork another process). Either you hit an
ulimit for the user postgresql runs under, or you need to flip some
kernel setting to increase the allowed number of processes. Increasing
max_connections will NOT help, because you are not even hitting the
current limit yet ...

> 2. Use connection pooling. I've found pgpool2 and pgbouncer from the
> Skype group. Does anyone have experience using either? The latter
> looks good, although we're usually skeptical about connection pooling
> in general (or is that just the mysqli_pconnect() hangover?)

pgbouncer works quite fine here.
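For reference, a minimal pgbouncer setup looks roughly like this; the database name, paths and ports below are illustrative placeholders, not values from this thread:

```ini
; /etc/pgbouncer/pgbouncer.ini -- minimal sketch.
; Clients connect to port 6432; pgbouncer multiplexes them onto a
; small, fixed set of real server connections.
[databases]
mydb = host=127.0.0.1 port=5432 dbname=mydb

[pgbouncer]
listen_addr = 127.0.0.1
listen_port = 6432
auth_type = md5
auth_file = /etc/pgbouncer/userlist.txt
; transaction pooling keeps the number of backend processes low
pool_mode = transaction
max_client_conn = 1000
default_pool_size = 20
```

The point of `max_client_conn` vs `default_pool_size` is exactly the fork problem above: a thousand client connections map onto ~20 Postgres backends.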
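To see which limit you are actually hitting, something like the following sketch can help (it assumes Linux; the sysctl names differ on the BSDs, and you must run it as, or check the limits for, the OS user the postmaster runs under):

```shell
#!/bin/sh
# Sketch: inspect the limits that can cause
# "could not fork new process ... Resource temporarily unavailable".

# Per-user cap on the number of processes (RLIMIT_NPROC):
echo "max user processes: $(ulimit -u)"

# Kernel-wide ceilings (Linux names; silently skipped elsewhere):
sysctl kernel.pid_max 2>/dev/null
sysctl kernel.threads-max 2>/dev/null
```

If `ulimit -u` is the bottleneck, raise it in /etc/security/limits.conf (or the init script) for the postgres user rather than system-wide.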
Stefan