Wrong again, Fabio.
PostgreSQL is not coded to manage memory usage in the way you think it
does with work_mem. Here is a quote from Citus about the dangers of
setting work_mem too high:

"When you consume more memory than is available on your machine you can
start to see out of memory errors within your Postgres logs, or in worse
cases the OOM killer can start to randomly kill running processes to free
up memory. An out of memory error in Postgres simply errors on the query
you're running, whereas the OOM killer in Linux begins killing running
processes, which in some cases might even include Postgres itself.

When you see an out of memory error you either want to increase the
overall RAM on the machine itself by upgrading to a larger instance, OR
you want to decrease the amount of memory that work_mem uses. Yes, you
read that right: in the out-of-memory case it's often better to decrease
work_mem instead of increasing it, since that is the amount of memory
that can be consumed by each process, and too many operations are
leveraging up to that much memory."

https://www.citusdata.com/blog/2018/06/12/configuring-work-mem-on-postgres/
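To make that concrete: work_mem is a per-sort/per-hash limit, not a global
cap, so every backend can use up to that much memory for each sort or hash
node in a plan. A common approach is to keep the server-wide default modest
and raise it only for the sessions that genuinely need it. A rough sketch
(the values below are purely illustrative, not recommendations):

    -- lower the server-wide default (illustrative value only)
    ALTER SYSTEM SET work_mem = '16MB';
    SELECT pg_reload_conf();

    -- raise it only for a session that needs a big sort or hash
    SET work_mem = '256MB';
    -- ... run the heavy query here ...
    RESET work_mem;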
Regards,
Michael Vitale