
Re: Size estimation of postgres core files

> It doesn't write out all of RAM, only the amount in use by the
> particular backend that crashed (plus all the shared segments attached
> by that backend, including the main shared_buffers, unless you disable
> that as previously mentioned).
>
> And yes, it can take a long time to generate a large core file.
>
> --
> Andrew (irc:RhodiumToad)

Based on Alvaro's response, I thought it was reasonably possible that the core file *could* include nearly all of RAM, which was my original question. If shared_buffers is, say, 50G and the OS has 1T, then shared_buffers is only a small portion of that. But my real question is what we should reasonably assume is possible: how much space should I provision for a volume so that it can hold the core dump in case of a crash? The time taken to write the core file would definitely be a concern if it could indeed be that large.
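
To make the numbers concrete, here is the back-of-envelope estimate I am working from, based on Andrew's description above; only the shared_buffers figure comes from my example, the backend-private and other shared-segment figures are guesses on my part:

    # Rough provisioning estimate per the description above.
    shared_buffers  = 50 * 1024**3   # 50 GB, included unless filtered out
    backend_private = 4 * 1024**3    # guess: work_mem, caches, temp allocations
    other_shared    = 1 * 1024**3    # guess: other attached shared segments
    core_estimate = shared_buffers + backend_private + other_shared
    print(core_estimate / 1024**3, "GB")   # ~55 GB per core file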

Could someone provide more information on exactly how to set that coredump_filter?
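
For what it is worth, here is a minimal sketch of what I understand the coredump_filter approach to involve, based on the Linux core(5) man page; the mask value and the way the postmaster PID is obtained are my assumptions, so corrections are welcome:

    #!/usr/bin/env python3
    # Sketch: write a mask to /proc/<pid>/coredump_filter so that shared
    # memory mappings (where shared_buffers lives) are left out of core
    # dumps.  Bit meanings are from the Linux core(5) man page.  Backends
    # forked after the change inherit the postmaster's setting, so this
    # would be run against the postmaster (as root or the postgres user).
    import sys

    DUMP_ANON_PRIVATE = 1 << 0   # keep: backend-private memory
    DUMP_ELF_HEADERS  = 1 << 4   # keep: needed for usable backtraces
    # Bits 1 (anonymous shared), 3 (file-backed shared) and 6 (hugetlb
    # shared) are left clear, which is what excludes the shared segments.

    def set_filter(pid, mask):
        path = "/proc/%d/coredump_filter" % pid
        with open(path, "w") as f:
            f.write("%#x" % mask)
        print("%s set to %#x" % (path, mask))

    if __name__ == "__main__":
        postmaster_pid = int(sys.argv[1])   # e.g. first line of postmaster.pid
        set_filter(postmaster_pid, DUMP_ANON_PRIVATE | DUMP_ELF_HEADERS)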

We are looking to enable core dumps to aid debugging in case of unexpected crashes, and we are wondering whether there are any general recommendations for balancing the costs and benefits of doing so.

Thank you!
Jeremy
