Zitat von "AL-Temimi, Muthana" <muthana.al-temimi@xxxxxxxxxxxxxxxxxxxxx>:
Hello admins,

I have PostgreSQL version 9.1 with the following configuration: max_connections=600, shared_buffers=1024M, work_mem=4M, and all other parameters at their defaults as delivered with PostgreSQL. It is installed on SUSE Enterprise Linux Server, with kernel.shmmax set to 2 GB in sysctl.conf, because with 1 GB PostgreSQL would not start at all. The total RAM of the server is 8 GB.

When the active connections reach 300 we get out-of-memory errors. Checking the system through "top" shows 7.8 GB of the 8 GB in use, and the machine starts to swap as well. At that point no new connections are possible. I have the connection pooler pgpool-II in front of the database, and everything with it has worked fine so far; no problems there.

Any help will be greatly appreciated.

Best regards,
Muthana AL-Temimi, M.Sc. Informations- und Kommunikations-Systeme
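(For reference, a sketch of the setup as described above, assuming stock file locations; the sysctl name is the standard Linux shared-memory parameter:)

    # postgresql.conf (as described; everything else at defaults)
    max_connections = 600
    shared_buffers  = 1024MB
    work_mem        = 4MB

    # /etc/sysctl.conf -- the shared memory ceiling must exceed
    # shared_buffers plus PostgreSQL's other shared allocations,
    # which is why 2 GB works here while 1 GB does not
    kernel.shmmax = 2147483648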
Hello,

Have you checked what is actually using the memory, e.g. which percentage goes to which processes? A typical PostgreSQL backend process on my systems uses ~4 MB of non-shared memory, so with ~300 connections and no pooling you already have ~1.2 GB just for the backend processes. Furthermore, work_mem is the limit of memory per sort operation, and a single connection and query can run many sorts at once, so the actual per-connection usage can be a multiple of work_mem.

Depending on your workload you can try reducing shared_buffers and work_mem: PostgreSQL does not need large shared_buffers because it relies on the OS cache most of the time, and if your sorts are rare or not that big/critical, work_mem at 2 MB might also be sufficient.
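As a quick check, something like this lists the backends by resident size (a sketch, assuming the Linux procps version of ps; note that RSS counts touched shared_buffers pages too, so it overstates each backend's private memory):

    # list postgres processes, largest resident set first (RSS in KB)
    ps -o pid,rss,cmd -C postgres --sort=-rss | head -n 20

And as a starting point for the reduction (just example values, not a recommendation; the right numbers depend on your workload):

    # postgresql.conf -- reduced sketch for an 8 GB machine
    shared_buffers = 512MB   # leave the rest of RAM to the OS cache
    work_mem       = 2MB     # per sort/hash, possibly several per query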
The general tuning advice is explained here in detail, and there are also hints on how to cope if you really need a massive number of concurrent connections:

https://wiki.postgresql.org/wiki/Tuning_Your_PostgreSQL_Server

Regards,
Andreas