Connection Sharing + Postgres: More than 1000 postmasters produce 70.000 context switches

Hello,
we installed a new Postgres 7.4.0 on a SuSE 9 system, which is running into
trouble with the system load.
Since it seems to be more a PHP connection-sharing problem than a Postgres
one (as we discussed on the pgsql-general list), I am doing an updated
crosspost here.

The database/LDAP server is part of an extranet based on Apache+PHP and,
besides the LDAP server, has no other services running. The system has
dual 2 GHz Xeons and 2 GB RAM.
While migrating all applications from two other Postgres 7.2 servers to the
new one, we ran into heavy load problems.
At the beginning there were problems with too much allocated shared memory,
as the system was swapping 5-10 MB/sec. So we reconfigured shared_buffers
down to 2048, which means 16 MB overall; that seems somewhat low, but I can
test higher values again next time.
We also corrected the higher values of sort_mem and vacuum_mem back to
sort_mem=512 and vacuum_mem=8192 to reduce memory usage, although we have
kernel.shmall = 1342177280 and kernel.shmmax = 1342177280.

Currently I have limited max_connections to 800, because every larger value
results in a system load of 60+ and at least 20.000 context switches.
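
For completeness, the relevant pieces of the configuration currently look
roughly like this (just a sketch of the values from above; the comments are
mine):

 # postgresql.conf
 max_connections = 800
 shared_buffers = 2048      # 2048 x 8 kB pages = 16 MB
 sort_mem = 512             # kB per sort
 vacuum_mem = 8192          # kB

 # /etc/sysctl.conf
 kernel.shmall = 1342177280
 kernel.shmmax = 1342177280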

So much for the Postgres background; now to the connection pooling:

My problem is that our Apache produces far more than 800 open connections,
because we are using more than 15 different databases and each httpd
process seems to keep a connection open to every database it has connected
to before.
Currently, at low-traffic time, I have 44 httpds running and 293
postmasters, which are nearly _all_ idle; a "ps ax" shows me at most 2 or 3
SELECTs, all others are waiting.
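
If I understand PHP's persistent connections correctly, pg_pconnect()
caches one link per distinct connection string *per Apache child*, which is
exactly why the numbers multiply. A sketch (the database names are
invented):

 <?php
 // Each distinct connection string gets its own persistent link,
 // cached inside this Apache child and kept alive across requests.
 $db1 = pg_pconnect("host=localhost dbname=app_one user=www");
 $db2 = pg_pconnect("host=localhost dbname=app_two user=www");
 // Once a child has touched all 15+ databases, it holds 15+ idle
 // backends: 44 children x 15 databases = up to 660 idle
 // connections, even at low traffic.
 ?>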

For now I have solved it in a very dirty way: I limited the number and the
lifetime of the httpd processes with these values:
 MaxKeepAliveRequests 10
 KeepAliveTimeout 2 
 MaxClients 100 
 MaxRequestsPerChild 300
 
We use PHP 4.3.4 and PHP 4.2.3 on the webservers. php.ini says:
[PostgresSQL]
; Allow or prevent persistent links.
pgsql.allow_persistent = On
; Maximum number of persistent links.  -1 means no limit.
pgsql.max_persistent = -1
; Maximum number of links (persistent+non persistent).  -1 means no limit.
pgsql.max_links = -1
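
One knob I have not seriously tried yet: instead of the unlimited -1,
pgsql.max_persistent can cap the persistent links each process may hold,
e.g. (sketch only; the value 2 is just an example, not tested here):

 pgsql.allow_persistent = On
 ; cap persistent links per process instead of no limit
 pgsql.max_persistent = 2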

We have now been running for days with an extremely unstable database
backend...
Are 1.000 processes the natural limit of PostgreSQL on Linux?
How can I realize more efficient connection pooling/reuse, or is the
connection pooling PHP offers here more harmful than helpful, because
connections to such a large number of databases cannot be shared in an
efficient way?

Maybe it is advisable to share only the connections to the "main"
database(s), and to pg_connect() to the other databases, along the lines
of the sketch below?
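
A minimal sketch of what I mean (db_open() and the database names are
made-up, not code we run): persistent links only for the main database,
plain throwaway connections for everything else.

 <?php
 // Hypothetical helper: persistent link for the "hot" main database,
 // non-persistent connection for the other 15+ databases, so the
 // number of idle postmasters stays bounded.
 $MAIN_DBS = array("maindb");

 function db_open($dbname) {
     global $MAIN_DBS;
     $conninfo = "host=localhost dbname=$dbname user=www";
     if (in_array($dbname, $MAIN_DBS)) {
         return pg_pconnect($conninfo);  // reused across requests
     }
     return pg_connect($conninfo);       // closed when the script ends
 }

 $db = db_open("maindb");
 ?>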

Is it enough to effectively turn the "pg_pconnect" in every script into a
"pg_connect" by setting "pgsql.allow_persistent = Off", or does this have a
different effect?
I tested this, but at the moment we are in low-traffic hours, so I cannot
say anything pro or con about this test yet.

Thanks a lot for your help; every idea is very welcome.
Andre


