R: repeated out of shared memory error - not related to max_locks_per_transaction

I experimented a lot with shared_buffers, but I haven't seen any difference.
With shared_buffers set to 16GB I start seeing errors after a day and a half,
and with 40GB they start at exactly the same point.
One would expect that if something is consuming shared memory, more than
doubling it would make some difference, but that isn't the case.
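
For what it's worth, a rough check (just a sketch, using the standard
pg_settings view): the shared lock table is sized at server start from
max_locks_per_transaction * (max_connections + max_prepared_transactions),
so it does not grow or shrink with shared_buffers. Its approximate capacity
with the current settings can be computed like this:

    -- approximate number of objects the heavyweight lock table can track;
    -- this allocation is fixed at startup and independent of shared_buffers
    SELECT (SELECT setting::int FROM pg_settings
             WHERE name = 'max_locks_per_transaction')
         * ((SELECT setting::int FROM pg_settings
              WHERE name = 'max_connections')
          + (SELECT setting::int FROM pg_settings
              WHERE name = 'max_prepared_transactions'))
           AS approx_lock_table_slots;
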
Alfonso

-----Original Message-----
From: Campbell, Lance <lance@xxxxxxxxxxxx>
Sent: Friday, 20 July 2018 14:33
To: Alfonso Moscato <alfonso.moscato@xxxxxxxxxxx>
Cc: pgsql-admin@xxxxxxxxxxxxxxxxxxxx
Subject: Re: repeated out of shared memory error - not related to
max_locks_per_transaction

I would also look up the definitions of shared_buffers and effective_cache_size.
If I remember correctly, you can think of shared_buffers as the memory
PostgreSQL itself has to work with for caching data.  Effective cache size is
an estimate of the total memory available for caching: shared_buffers plus
however much memory the OS has free to cache files.  So effective_cache_size
should be equal to or larger than shared_buffers.  It is only used to help
the planner cost queries.
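
A quick way to see what the running server actually has for these (just a
sketch; the setting names are the standard ones):

    -- current values of the memory-related settings discussed above
    SELECT name, setting, unit
      FROM pg_settings
     WHERE name IN ('shared_buffers', 'effective_cache_size',
                    'work_mem', 'maintenance_work_mem');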

Double check the documentation.  

Lance

Sent from my iPad

> On Jul 20, 2018, at 6:15 AM, Alfonso Moscato <alfonso.moscato@xxxxxxxxxxx> wrote:
> 
> We are going crazy with "out of shared memory" errors, and we can't 
> figure out the reason.
> We are running "PostgreSQL 9.6.9 on x86_64-pc-linux-gnu (Ubuntu 
> 9.6.9-2.pgdg16.04+1), compiled by gcc (Ubuntu 5.4.0-6ubuntu1~16.04.9) 
> 5.4.0 20160609, 64-bit".  The server has 92GB of RAM; it is a mixed 
> environment (mostly OLTP, with some DW), with 100 sessions constantly 
> open (a CRM) and some long queries that run every half hour.
> Everything works fine, except that after a day and a half we start 
> receiving a lot of "out of shared memory" messages.
> I am sure it is not related to the usual max_locks_per_transaction 
> issue, because we have set max_locks_per_transaction to 384, and when 
> we receive these messages we have no more than 50-100 locks in total.
> Restarting the server usually keeps things working for about another 
> day and a half, and then the messages start again.
> Looking at the log, we see that this error starts at random, sometimes 
> on very small queries returning only a few kilobytes of data.
> We have tried a lot of different configurations, including ones 
> generated with pgtune and pgconfig 2.0.
> 
> Currently, we have: 
> max_connections = 200
> shared_buffers = 23GB
> effective_cache_size = 69GB
> maintenance_work_mem = 2GB
> checkpoint_completion_target = 0.9
> wal_buffers = 16MB
> default_statistics_target = 100
> random_page_cost = 4
> effective_io_concurrency = 2
> work_mem = 60293kB
> min_wal_size = 2GB
> max_wal_size = 4GB
> max_worker_processes = 4
> max_parallel_workers_per_gather = 2
> max_locks_per_transaction = 384
> but we have also tried work_mem up to 130MB, shared_buffers up to 
> 40GB, and effective_cache_size at 4GB
> 
> shared memory limits are very big: 
> max number of segments = 4096
> max seg size (kbytes) = 18014398509465599
> max total shared memory (kbytes) = 18014398442373116
> min seg size (bytes) = 1
> 
> thanks
> 
> 
> 
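
Regarding the 50-100 locks mentioned above: when the errors start again, a
check along these lines (a sketch using the standard pg_locks view, run from
any session that can still connect) would show whether the shared lock table
really is nearly empty at that moment:

    -- total entries currently held in the shared lock table
    SELECT count(*) AS total_locks FROM pg_locks;

    -- per-backend breakdown, to spot a session holding an unusual number of locks
    SELECT pid, count(*) AS locks_held
      FROM pg_locks
     GROUP BY pid
     ORDER BY locks_held DESC
     LIMIT 10;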





