Turn off memory overcommit so you get a
nicer message in the PG log file instead of the OOM killer crashing the PG service:
vm.overcommit_memory=2
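For reference, a sketch of how that might be applied on Linux. The sysctl name is standard; the drop-in file name below is only an example, and whether vm.overcommit_ratio also needs tuning depends on your workload:

```shell
# Apply immediately (requires root). With overcommit_memory=2 the kernel
# refuses allocations beyond the commit limit instead of invoking the
# OOM killer, so PostgreSQL sees the allocation fail and logs
# "out of memory" rather than being SIGKILLed.
sysctl -w vm.overcommit_memory=2

# Persist across reboots (file name is arbitrary / hypothetical)
echo "vm.overcommit_memory = 2" >> /etc/sysctl.d/60-postgresql.conf
```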
Jean-Christophe Boggio wrote on 10/13/2023 9:06 AM:
Hello,
On my dev laptop, I have ~40GB free RAM. When launching a heavy
calculation in PostgreSQL (within a stored procedure), it consumes as
much memory as is available and then gets killed by OOM. There is only
one connected session.
I have the following settings, which look reasonable (to me):
shared_buffers = 512MB                  # min 128kB
#huge_pages = try                       # on, off, or try
temp_buffers = 512MB                    # min 800kB
#max_prepared_transactions = 0          # zero disables the feature
work_mem = 1GB                          # min 64kB
#hash_mem_multiplier = 1.0              # 1-1000.0 multiplier on hash table work_mem
maintenance_work_mem = 1GB              # min 1MB
#autovacuum_work_mem = -1               # min 1MB, or -1 to use maintenance_work_mem
#logical_decoding_work_mem = 64MB       # min 64kB
#max_stack_depth = 2MB                  # min 100kB
#shared_memory_type = mmap              # the default is the first option
dynamic_shared_memory_type = posix      # the default is the first option
#temp_file_limit = -1                   # limits per-process temp file space
This is PostgreSQL 14.7 running on Ubuntu 23.04.
What can I do to prevent the crash?
Thanks for your help,