This error occurred a couple of days ago, and it is unlikely to repeat, as it was caused by a series of exams being taken in Moodle with over 500 students.
However, I'd like to tune Postgres to see whether it can be prevented.
First of all, we did not run out of memory within the OS. The system is monitored by Cacti, and
real used memory never exceeded 30 GB on a server with 64 GB of RAM plus some swap.
I can also see in the messages log that the Linux kernel OOM killer was never triggered.
The postgres user running the DB has no ulimits on memory which might apply.
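For reference, the checks were roughly along these lines (our distro logs kernel messages to /var/log/messages; the grep pattern is just what we used, adjust to taste):

    # look for OOM-killer activity in the kernel log -- nothing found
    grep -i 'out of memory\|oom-killer' /var/log/messages

    # memory-related ulimits for the postgres user -- all unlimited
    su - postgres -c 'ulimit -v -m'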
Here are the relevant entries from the Postgres log:
<2020-12-19 13:19:29 GMT>LOG: could not fork new process for connection: Cannot allocate memory
TopMemoryContext: 136704 total in 13 blocks; 6688 free (4 chunks); 130016 used
smgr relation table: 24576 total in 2 blocks; 13904 free (4 chunks); 10672 used
TopTransactionContext: 8192 total in 1 blocks; 6152 free (5 chunks); 2040 used
<2020-12-19 13:19:29 GMT>LOG: could not fork new process for connection: Cannot allocate memory
TransactionAbortContext: 32768 total in 1 blocks; 32728 free (0 chunks); 40 used
Portal hash: 8192 total in 1 blocks; 1672 free (0 chunks); 6520 used
PortalMemory: 0 total in 0 blocks; 0 free (0 chunks); 0 used
My conclusion is that this memory error must be related to limits within the Postgres tunables, or perhaps to something like shared memory (SHM), which we have not configured in sysctl.conf, so it is left at the kernel defaults.
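To be explicit: nothing like the following exists in our /etc/sysctl.conf, so these kernel parameters are at their defaults (the parameter names are the standard Linux ones; the values shown are only illustrative, not ours):

    # shared memory limits (bytes / pages) -- illustrative values only
    kernel.shmmax = 17179869184
    kernel.shmall = 4194304

    # overcommit policy; the Linux default is 0 (heuristic overcommit)
    vm.overcommit_memory = 2
    vm.overcommit_ratio = 90

If fork() returns ENOMEM while free memory is still available, my understanding is that the kernel's commit limit (governed by the vm.overcommit_* settings plus swap) is one plausible suspect, but I may be wrong about that.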
The values in our postgresql.conf were mostly suggested by the pgtune script. The one exception is max_connections = 500, which was increased gradually over time to avoid bottlenecks with the number of Apache clients, as we saw we still had plenty of memory available with a higher connection limit.
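For context, the pgtune output for this class of machine is roughly of this shape (illustrative numbers, not a verbatim copy of our file; I will post the exact values on request):

    max_connections = 500
    shared_buffers = 16GB              # ~25% of RAM, per pgtune
    effective_cache_size = 48GB        # ~75% of RAM
    work_mem = 8MB                     # per sort/hash operation, per connection
    maintenance_work_mem = 2GB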
Which tunables do you need to see?