On Sun, Apr 14, 2019 at 05:19:50PM -0400, Jeff Janes wrote:
On Sun, Apr 14, 2019 at 4:51 PM Gunther <raj@xxxxxxxx> wrote:
For weeks now, I have been banging my head against an "out of memory"
situation. There is only one query I am running on an 8 GB system, and
whatever I try, I get knocked out by this out-of-memory error.
Is PostgreSQL throwing an error with OOM, or is it getting killed -9 by the
OOM killer? Do you get a core file you can inspect with gdb?
You might want to see the query, but it is a huge plan, and I can't
really break it down. It shouldn't matter, though. But just so you can
get a glimpse, here is the plan:
Insert on businessoperation (cost=5358849.28..5361878.44 rows=34619 width=1197)
  ->  Unique  (cost=5358849.28..5361532.25 rows=34619 width=1197)
Maybe it is memory for trigger or constraint checking, although I don't
know why that would appear instantly. What triggers or constraints do you
have on businessoperation?
Yeah, that would be my guess too. If I had to guess, something likely gets
confused and allocates memory in es_query_ctx instead of the per-tuple
context (es_per_tuple_exprcontext).
Triggers, constraints and expression evaluation all seem like plausible
candidates. It's going to be hard to nail down the exact place, though :-(
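As for the question about what triggers or constraints are on
businessoperation, a quick way to pull those out of the catalogs
(assuming it's an ordinary table visible in the current database) would
be something like:

    -- user-level triggers (internal FK triggers filtered out)
    SELECT tgname, pg_get_triggerdef(oid)
      FROM pg_trigger
     WHERE tgrelid = 'businessoperation'::regclass
       AND NOT tgisinternal;

    -- all constraints on the table, with their definitions
    SELECT conname, pg_get_constraintdef(oid)
      FROM pg_constraint
     WHERE conrelid = 'businessoperation'::regclass;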
What if you just run the SELECT without the INSERT? Or insert into a temp
table rather than into businessoperation? And if that doesn't crash, what
if you then insert into businessoperation from the temp table?
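Sketched out (with "..." standing in for the SELECT that wasn't shown,
and businessoperation_tmp just a made-up name for the scratch table),
that experiment could look roughly like:

    -- 1) run the SELECT on its own; does it still run out of memory?
    SELECT ... ;

    -- 2) write the result into a temp table instead of businessoperation
    CREATE TEMP TABLE businessoperation_tmp AS
    SELECT ... ;

    -- 3) if that survives, load the real table from the temp table
    --    (assuming the column lists line up)
    INSERT INTO businessoperation SELECT * FROM businessoperation_tmp;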
Yeah. What's the schema of "businessoperation"? Anything special about
it? Triggers, expression indexes, check constraints, ...
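For the expression indexes specifically, something along these lines
should list them (or just run \d+ businessoperation in psql, which shows
columns, indexes, check constraints and triggers in one go):

    -- every index on the table, with its full definition;
    -- an expression index shows the expression in the definition
    SELECT pg_get_indexdef(indexrelid)
      FROM pg_index
     WHERE indrelid = 'businessoperation'::regclass;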
Gunther, you mentioned you build Postgres from source. Would it be
possible to add some sort of extra debugging to see where the memory is
allocated from? It's a bit heavy-handed, though.
Or maybe splitting es_query_ctx into smaller contexts. That might be
easier to evaluate than sifting through god-knows-how-many GBs of log.
regards
--
Tomas Vondra http://www.2ndQuadrant.com
PostgreSQL Development, 24x7 Support, Remote DBA, Training & Services