On 22 October 2014 at 0:25, Montana Low wrote:
> I'm running postgres-9.3 on a 30GB EC2 Xen instance w/ Linux kernel 3.16.3.
> I receive numerous "ERROR: out of memory" messages in the log, which are
> aborting client requests, even though there appears to be 23GB available
> in the OS cache.
>
> There is no swap on the box. Postgres is behind pgbouncer to protect it
> from the 200 real clients; pgbouncer limits connections to 32, although
> there are rarely more than 20 active connections, even though postgres
> max_connections is set very high for historic reasons. There is also a
> 4GB java process running on the box.
>
> relevant postgresql.conf:
>
> max_connections = 1000        # (change requires restart)
> shared_buffers = 7GB          # min 128kB
> work_mem = 40MB               # min 64kB
> maintenance_work_mem = 1GB    # min 1MB
> effective_cache_size = 20GB
>
> sysctl.conf:
>
> vm.swappiness = 0
> vm.overcommit_memory = 2

This means you have 'no overcommit', so the amount of memory available is
limited to swap + RAM * overcommit_ratio. The default value for
overcommit_ratio is 50% of RAM, and as you have no swap, that effectively
means only 50% of the RAM is available to the system.

If you want to verify this, check /proc/meminfo - see the lines CommitLimit
(the current limit) and Committed_AS (the committed address space). Once
Committed_AS reaches the limit, it's game over.

There are different ways to fix this, or at least improve the situation:

(1) increasing the overcommit_ratio (clearly, 50% is way too low - something
    like 90% might be more appropriate on 30GB of RAM without swap)

(2) adding swap (say, a small ephemeral drive, with swappiness=10 or
    something like that)

Tomas
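
In case it helps, a minimal sketch of the check and the two fixes above
(the 90% ratio, swappiness=10 and the device name are only examples, not
tuned values):

    # With vm.overcommit_memory=2, CommitLimit is roughly
    # swap + RAM * vm.overcommit_ratio / 100; allocations start failing
    # once Committed_AS reaches it.
    grep -E 'CommitLimit|Committed_AS' /proc/meminfo

    # (1) raise the ratio so more of the 30GB is actually usable
    sysctl -w vm.overcommit_ratio=90

    # (2) add swap (device name is just an example) and keep it as a
    #     last resort via a low swappiness
    mkswap /dev/xvdb && swapon /dev/xvdb
    sysctl -w vm.swappiness=10

    # to persist across reboots, put into /etc/sysctl.conf:
    #   vm.overcommit_memory = 2
    #   vm.overcommit_ratio = 90
    #   vm.swappiness = 10

Either way, keep an eye on Committed_AS vs. CommitLimit in /proc/meminfo
after the change.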