Hi,

I have been using Table 17-2, "PostgreSQL Shared Memory Usage" (http://www.postgresql.org/docs/9.1/interactive/kernel-resources.html) to calculate approximately how much memory the server will use. I'm running PostgreSQL 9.1 on a 64-bit Linux 2.6 (RHEL 6) system with 8GB RAM. The database is approximately 5GB and sees a mixture of reads and writes. Postgres is occasionally being killed by the Linux OOM killer. I am trying to understand how much memory Postgres could use, and how to change the configuration to bring it down to a level that won't get it killed.

Key configuration values are:

    max_connections = 350
    shared_buffers = 4GB
    temp_buffers = 24MB
    max_prepared_transactions = 211
    work_mem = 16MB
    maintenance_work_mem = 131MB
    wal_buffers = -1
    wal_keep_segments = 128
    checkpoint_segments = 64
    effective_cache_size = 4GB
    autovacuum_max_workers = 4

which, for the purposes of the table, I have interpreted as:

    max_locks_per_transaction = 64
    max_connections = 350
    autovacuum_max_workers = 4
    max_prepared_transactions = 211 (I've since realised this can be 0; I use prepared statements, not 2PC)
    shared_buffers = 4294967296
    wal_block_size = 8192
    wal_buffers = 16777216 (actually -1, but following the documented default of min(shared_buffers/32, 16MB) it should be 16MB)

with wal_segment_size = 16777216 and block_size = 8192.

Using the equations on the kernel resources page, I get:

    Connections = (1800 + 270 * max_locks_per_transaction) * max_connections
                = (1800 + 270 * 64) * 350
                = 6,678,000

    Autovacuum workers = (1800 + 270 * max_locks_per_transaction) * autovacuum_max_workers
                       = (1800 + 270 * 64) * 4
                       = 76,320

    Prepared transactions = (770 + 270 * max_locks_per_transaction) * max_prepared_transactions
                          = (770 + 270 * 64) * 211
                          = 3,808,550

    Shared disk buffers = (block_size + 208) * shared_buffers
                        = (8192 + 208) * 4294967296
                        = 36,077,725,286,400 (~33TB)

    WAL buffers = (wal_block_size + 8) * wal_buffers
                = (8192 + 8) * 16777216
                = 137,573,171,200 (~128GB)

    Fixed space requirements = 788,480

    Overall = 36,215,309,808,950 bytes (~33TB!)

33TB doesn't seem right, and while I know the equations are only approximations, this is clearly too much. What have I done wrong? I read a prior thread about this on the pgsql lists which seemed to indicate that the equations for shared disk buffers and WAL buffers should be divided by block_size (8192), and looking at it more closely, I wonder whether both equations should really just be the per-buffer overhead plus the buffer itself.

Also, what is the relationship between memory use and work_mem (set to 16MB in my case)? I understand work_mem is per sort, and in most cases our queries only have a single sort. Does this mean an additional 16MB per sorting client (350 * 16MB = 5.6GB), or does a backend only use work memory as it needs it? That is, does it preallocate the full 16MB for each sort, or allocate on an as-needed basis depending on the size of the sorted data?

Are there any other ways to calculate the worst-case memory usage of a given Postgres configuration?

My gut feeling is to reduce shared_buffers to 1GB or less, and to reduce max_connections to ~150-200 (to limit the worst-case work_mem impact).

Kind Regards,

Mike
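
P.S. To sanity-check the "divide by block_size" theory, here is the quick back-of-envelope Python script I used. It assumes (my reading of the table, not something the docs state outright) that shared_buffers and wal_buffers in the formulas are counts of 8kB buffers rather than byte totals; the constants and formulas themselves are copied straight from Table 17-2.

    # Back-of-envelope recalculation of Table 17-2, assuming (unconfirmed)
    # that shared_buffers and wal_buffers are counts of 8kB buffers,
    # not byte totals.

    BLOCK_SIZE = 8192       # block_size
    WAL_BLOCK_SIZE = 8192   # wal_block_size

    max_connections = 350
    autovacuum_max_workers = 4
    max_prepared_transactions = 211
    max_locks_per_transaction = 64

    shared_buffers = (4 * 1024**3) // BLOCK_SIZE    # 4GB  -> 524,288 buffers
    wal_buffers = (16 * 1024**2) // WAL_BLOCK_SIZE  # 16MB -> 2,048 buffers

    connections = (1800 + 270 * max_locks_per_transaction) * max_connections
    autovacuum_workers = (1800 + 270 * max_locks_per_transaction) * autovacuum_max_workers
    prepared_transactions = (770 + 270 * max_locks_per_transaction) * max_prepared_transactions
    shared_disk_buffers = (BLOCK_SIZE + 208) * shared_buffers
    wal = (WAL_BLOCK_SIZE + 8) * wal_buffers
    fixed = 788480

    total = (connections + autovacuum_workers + prepared_transactions
             + shared_disk_buffers + wal + fixed)
    print("shared memory estimate: %.2f GiB" % (total / 1024.0**3))  # ~4.13 GiB

    # Separately, worst case if all 350 backends each ran one full work_mem sort:
    print("work_mem worst case: %.2f GiB" % (350 * 16 / 1024.0))     # ~5.47 GiB

Under that reading the total comes out a little over 4GB, i.e. roughly shared_buffers plus bookkeeping overhead, which at least sounds plausible on an 8GB box, though it leaves little headroom once work_mem and the OS cache are counted.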