You can definitely overload most systems by trying to start too many connections at once. (This is actually true for most relational databases.) We used to see this scenario when we'd start a bunch of web servers running preforked Apache at the same time (where each fork held its own connection). One temporary workaround is to slow-start the web cluster - bringing servers up one at a time and giving each a chance to finish establishing its connections before starting the next.
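For example, something like this (the host names and start command here are placeholders - adjust to however you manage your web tier):

    # bring the app servers up one at a time instead of all at once
    for host in web1 web2 web3; do
        ssh "$host" 'service httpd start'
        sleep 30   # give each server time to establish its database connections
    done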
You can kill the processes by looking for them at the unix prompt instead of inside the database ('ps -fu postgres'). You can see where they are coming from using something like 'netstat -an | grep 5432' (or whatever port your database is listening on).
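For example (assuming the database runs as the 'postgres' OS user on the default port 5432):

    # list the backend processes, with their start times and parent pids
    ps -fu postgres
    # see which client hosts the connections are coming from
    netstat -an | grep 5432
    # send a plain SIGTERM to a stuck backend; avoid kill -9, which forces the
    # postmaster into crash recovery and drops every other session
    kill <pid-of-stuck-backend>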
pgbouncer is a great solution for managing large sets of connections that come and go often, and it will really help here. You can run it directly on each of the web servers or client systems, you can run it in between on its own system(s), or you can run it on the database server itself (if necessary). You'll want to tune it so it only opens as many server connections as you expect to have concurrently running queries; it takes a little experimenting to find the optimum settings. If you start pgbouncer first, you can bring up lots of concurrent client connections to pgbouncer and the database will hardly notice.
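A minimal pgbouncer.ini sketch along those lines (the database name, host, paths, and pool sizes are made-up placeholders, not recommendations - you'll need to experiment):

    [databases]
    ; "appdb" and the host are placeholders; point this at your real database
    appdb = host=10.0.0.5 port=5432 dbname=appdb

    [pgbouncer]
    listen_addr = 0.0.0.0
    listen_port = 6432
    auth_type = md5
    auth_file = /etc/pgbouncer/userlist.txt
    ; transaction pooling reuses server connections most aggressively, but only
    ; works if the app doesn't depend on session state (temp tables, prepared
    ; statements, etc.); otherwise use session pooling
    pool_mode = transaction
    ; accept lots of client connections...
    max_client_conn = 1000
    ; ...but only open roughly as many server connections as you expect
    ; concurrently running queries
    default_pool_size = 20
    admin_users = postgres

The applications then connect to port 6432 instead of 5432, and pgbouncer hands each transaction (or session, depending on pool_mode) a server connection from the small pool.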
Trying to stay current with the latest patches and releases is a lot of work and little appreciated. In the long run, though, it is far easier to tackle upgrades incrementally than to do one big upgrade - skipping a bunch of releases - every now and then. This is true for the OS as well as the database. It is not always possible to do an upgrade, and when it is, it can take months of planning; hopefully you aren't in that situation. Building processes that make these patches and upgrades routine is much saner if you can manage it.

One nice thing about having pgbouncer between the application and the database is that you can reconfigure pgbouncer to talk to a different database without touching the application code at all. Sometimes that is easier to accomplish politically. Swapping out a database running behind a cluster of application servers with minimal risk and minimal downtime is a technical as well as a political challenge, but it is worth it when you can get onto the latest and greatest.

Good luck!
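P.S. A rough sketch of that kind of swap, assuming pgbouncer's admin console is reachable on port 6432, "postgres" is listed in admin_users, and "appdb" is the pool name from pgbouncer.ini (all placeholders carried over from the sketch above):

    # 1. edit the [databases] entry for appdb so it points at the new server, then:
    psql -p 6432 -U postgres pgbouncer -c "PAUSE appdb"    # let in-flight queries finish, hold new ones
    psql -p 6432 -U postgres pgbouncer -c "RELOAD"         # re-read pgbouncer.ini and pick up the new host
    psql -p 6432 -U postgres pgbouncer -c "RESUME appdb"   # release waiting clients against the new database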
On Tue, Dec 22, 2015 at 3:09 AM, Pavel Stehule <pavel.stehule@xxxxxxxxx> wrote:
> Hi
>
> 2015-12-22 8:59 GMT+01:00 Artem Tomyuk <admin@xxxxxxxxxxxxxx>:
>
>> Hi.
>>
>> I've noticed huge decrease in performance. During this in htop i see a lot (200 - 300) of connections in state "startup", each of them eats 3-3% of CPU time. This processes are not visible in pg_stat_activity so i cant understand what they are doing, and i cant kill them. I cant see the bottleneck in Disk IO to. The logs of postgres says nothing to. I am confused.....
>>
>> What can be the cause of huge amount of "startup" connections....
>> Maybe its better to start use connection pooler such as pgbouncer?
>>
>> Thanks a lot.
>
> What is your max_connections? Can you ran "perf top" ? What is there.
>
> Too high number can enforce system overloading. You cannot to see these connections in pg_stat_activity because the process in this state isn't fully initialized.
>
> There was lot of bugfix releases after 9.1.2 - currently there is PostgreSQL 9.2.19. Try to upgrade first.
>
> Regards
>
> Pavel
>
>> PS.
>> Server config is:
>> 2 * Intel Xeon 2660 CPU with 64 gigs of RAM.
>> Hardware RAID10.
>> Centos 6.6, PostgreSQL 9.1.2