R: repeated out of shared memory error - not related to max_locks_per_transaction


 



Hi Evan,

I have tried shared_buffers from 8 GB to 40 GB, and max_locks_per_transaction from 128 to 384.

I don’t see any relevant difference: the errors start more or less after a day and a half.
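A rough way to check whether the shared lock table is actually filling up is to compare the number of entries in pg_locks against its theoretical capacity, which is max_locks_per_transaction * (max_connections + max_prepared_transactions). A minimal sketch (fast-path locks inflate the pg_locks count a little, so treat it only as an approximation):

    -- locks currently tracked vs. the lock table's theoretical capacity
    SELECT count(*) AS locks_held,
           current_setting('max_locks_per_transaction')::int *
           (current_setting('max_connections')::int +
            current_setting('max_prepared_transactions')::int) AS lock_table_capacity
    FROM pg_locks;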

Regards

alfonso

 

From: Evan Bauer <evanbauer@xxxxxxx>
Sent: Friday, 20 July 2018 16:40
To: Alfonso Moscato <alfonso.moscato@xxxxxxxxxxx>
Cc: Fabio Pardi <f.pardi@xxxxxxxxxxxx>; MichaelDBA <MichaelDBA@xxxxxxxxxxx>; pgsql-admin@xxxxxxxxxxxxxxxxxxxx
Subject: Re: repeated out of shared memory error - not related to max_locks_per_transaction

 

Alfonso,

 

There was a lot of buzz when 9.6 came out around getting the “Out of shared memory” message due to virtual lock space consumption.  I see that you have set a high value (max_locks_per_transaction = 384), but have you tried adjusting it to see if that either increases or decreases the amount of time before the messages appear and you have to restart pg?
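For reference, a minimal sketch of how such an adjustment could be made; the value is only an example, and this parameter only takes effect after a server restart:

    ALTER SYSTEM SET max_locks_per_transaction = 256;  -- example value, not a recommendation
    -- restart the server afterwards, e.g. pg_ctl restart -D <data_directory>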

 

Regards,

 

- Evan

 

Evan Bauer
eb@xxxxxxxxxxxxx
+1 646 641 2973
Skype: evanbauer



On Jul 20, 2018, at 10:23, Alfonso Moscato <alfonso.moscato@xxxxxxxxxxx> wrote:

 

Michael, Fabio,

Moreover, I get the message “Out of shared memory”, not “out of memory”.

Anyway, I can confirm that when the errors began there was more than 10 GB of free memory.

Regards

Alfonso

 

From: Fabio Pardi <f.pardi@xxxxxxxxxxxx>
Sent: Friday, 20 July 2018 15:57
To: MichaelDBA <MichaelDBA@xxxxxxxxxxx>
Cc: pgsql-admin@xxxxxxxxxxxxxxxxxxxx
Subject: Re: repeated out of shared memory error - not related to max_locks_per_transaction

 

Michael,

I think we are talking about 2 different scenarios.

1) A single operation uses more than work_mem -> it gets spilled to disk, e.g. a big sort. That's what I mentioned.

2) There are many, many concurrent operations, and one or more of them wants to allocate work_mem but the memory on the server is exhausted at that point -> in that case you will get 'out of memory'. That's what you are referring to.

 

Given the description of the problem (RAM and Postgres settings), and the fact that Alfonso says that "there is a lot of free memory", I think it is unlikely that we are in the second situation described above.
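One rough way to tell the two scenarios apart is to look at temp-file activity: spills from the first scenario leave traces in pg_stat_database without raising any error. A minimal sketch, assuming a database named 'mydb' (placeholder name):

    -- cumulative temp-file spills per database; scenario 1 shows up here
    SELECT datname, temp_files, pg_size_pretty(temp_bytes) AS temp_spilled
    FROM pg_stat_database
    WHERE datname = 'mydb';  -- 'mydb' is a placeholder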


regards,

fabio pardi

 

 

 

 

On 20/07/18 15:28, MichaelDBA wrote:

Wrong again, Fabio. PostgreSQL is not coded to manage memory usage in the way you think it does with work_mem. Here is a quote from Citus about the dangers of setting work_mem too high.

When you consume more memory than is available on your machine you can start to see out of memory errors within your Postgres logs, or in worse cases the OOM killer can start to randomly kill running processes to free up memory. An out of memory error in Postgres simply errors on the query you’re running, whereas the OOM killer in Linux begins killing running processes, which in some cases might even include Postgres itself.

When you see an out of memory error you either want to increase the overall RAM on the machine itself by upgrading to a larger instance, or you want to decrease the amount of memory that work_mem uses. Yes, you read that right: when you run out of memory it’s better to decrease work_mem instead of increasing it, since that is the amount of memory that can be consumed by each process, and too many operations are leveraging up to that much memory.


https://www.citusdata.com/blog/2018/06/12/configuring-work-mem-on-postgres/
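To put that in numbers, a very rough worst-case figure can be derived from the settings themselves. The sketch below arbitrarily assumes two work_mem-sized sort/hash operations per connection and relies on pg_size_bytes(), which is available from 9.6 on:

    -- pessimistic upper bound: every connection running two work_mem-sized operations
    SELECT pg_size_pretty(
             pg_size_bytes(current_setting('work_mem')) *
             current_setting('max_connections')::bigint * 2
           ) AS worst_case_work_mem;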

Regards,
Michael Vitale


Friday, July 20, 2018 9:19 AM

Nope Michael,

If 'stuff' gets spilled to disk, it does not end up in an error. It will silently write a file to disk for the time being and then delete it when your operation is finished.

period.

Based on your log settings, it might appear in the logs, under 'temporary file created..'.
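If those lines do not show up, a minimal sketch for making every temporary file visible in the log (log_temp_files is a size threshold in kB: 0 logs everything, -1 disables logging):

    ALTER SYSTEM SET log_temp_files = 0;  -- log every temporary file, regardless of size
    SELECT pg_reload_conf();              -- this parameter does not need a restart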

 

regards,

fabio pardi

 

 

On 20/07/18 15:00, MichaelDBA wrote:

 

Friday, July 20, 2018 9:00 AM

I do not think that is true. Stuff just gets spilled to disk when the work_mem buffers would exceed the work_mem constraint. They are not constrained by what real memory is available, hence the memory error! They will try to get memory even if it is not available, as long as the work_mem threshold is not reached.

Regards,
Michael Vitale



Friday, July 20, 2018 8:47 AM

work_mem cannot be the cause of it, for the simple reason that if the memory needed by your query overflows work_mem, it will spill to disk.

 

regards,

fabio pardi

 

 

On 20/07/18 14:35, MichaelDBA wrote:

 

Friday, July 20, 2018 8:35 AM

Perhaps your "work_mem" setting is causing the memory problems.  Try reducing it to see if that alleviates the problem.

Regards,
Michael Vitale


Friday, July 20, 2018 8:32 AM

I would also look up the definitions of shared buffers and effective cache. If I remember correctly, you can think of shared buffers as how much memory PostgreSQL itself has to work with, while effective cache is an estimate of how much memory is available overall: shared buffers plus whatever the OS can use to cache files in memory. So effective cache should be equal to or larger than shared buffers. Effective cache is used to help with SQL planning.

Double check the documentation. 
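A minimal sketch for checking both values; the figure below is only a placeholder, and effective_cache_size is just a planner hint, so it allocates nothing and can be changed with a reload:

    SHOW shared_buffers;
    SHOW effective_cache_size;
    ALTER SYSTEM SET effective_cache_size = '60GB';  -- placeholder value, not advice
    SELECT pg_reload_conf();                         -- no restart needed for this setting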

Lance

Sent from my iPad

 

 

 

 

