"Alfonso Moscato" <alfonso.moscato@xxxxxxxxxxx> writes: > We are getting crazy with "out of shared memory" errors, and we can't figure > the reason. I don't think any of the advice posted so far has anything to do with your problem --- certainly, fooling with work_mem is unrelated. PG shared memory is a fixed-size arena (for any one setting of shared_buffers, max_connections, max_locks_per_transaction, and a couple other variables) and most of its contents are pre-allocated at postmaster start. What you are describing sounds like a long-term leak of additional, post-startup shmem allocations, eventually running out of the available slop in the shmem arena. work_mem, and other user-visible knobs, have nothing to do with this because those control allocations in process private memory not shmem. I'm pretty sure that the *only* post-startup shmem allocations in the core code are for lock table entries. However, if you're running any non-core extensions, it's possible that one of them does such allocations and has a logic error that results in a shmem leak. As an amelioration measure, you could raise max_locks_per_transaction, which will increase the arena size without actually eating any additional space immediately at startup. That might not cure the problem, but at least it would increase the interval at which you have to restart the server. As for real solutions, I'd first look harder at the question of how many lock table entries you need. The fact that you only see a few dozen active entries when you look (after a failure) doesn't prove a thing about what the max transient requirement is. Do you have any applications that touch a whole lot of tables in a single transaction? Are you using any user-defined (advisory) locks, and if so what's the usage pattern like for those? The "bug in an extension" theory also needs investigation. regards, tom lane