
Re: ERROR: too many dynamic shared memory segments

Thank you, Thomas!



--
regards,
Jakub Glapa

On Thu, Dec 7, 2017 at 10:30 PM, Thomas Munro <thomas.munro@xxxxxxxxxxxxxxxx> wrote:
On Tue, Dec 5, 2017 at 1:18 AM, Jakub Glapa <jakub.glapa@xxxxxxxxx> wrote:
> I see that the segfault is under active discussion but just wanted to ask if
> increasing the max_connections to mitigate the DSM slots shortage is the way
> to go?

Hi Jakub,

Yes.  In future releases this situation will improve (maybe we'll
figure out how to use one DSM segment for all the Gather nodes in your
query plan, and maybe it'll be moot anyway because we may be able to
use Parallel Append for queries like yours, so that the same set of
workers is reused across all the child plans instead of the
fork()-fest you're presumably seeing).  For now your only choice, if
you want that plan to run, is to crank up max_connections so that the
total number of concurrently executing Gather nodes stays below about
64 + 2 * max_connections.  There is also a crash bug in the
out-of-slots case, as discussed; it's fixed in the next point release,
but even with that fix in place you'll still need a high enough
max_connections setting to reliably complete the query without an
error.
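
To make that arithmetic concrete, here is a minimal sketch (the figure
of 200 concurrent Gather nodes below is only an assumed example, not a
number taken from your workload):

    -- Suppose the plan needs ~200 Gather nodes running concurrently.
    -- DSM slots available is roughly 64 + 2 * max_connections, so we need
    --   64 + 2 * max_connections > 200  =>  max_connections > 68.
    SHOW max_connections;                    -- check the current setting
    ALTER SYSTEM SET max_connections = 100;  -- choose a value with some headroom
    -- max_connections only takes effect after a server restart, e.g.:
    --   pg_ctl -D $PGDATA restart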

Thanks for the report!

--
Thomas Munro
http://www.enterprisedb.com

