
Re: is there a way to firmly cap postgres worker memory consumption?

Steve Kehlet <steve.kehlet@xxxxxxxxx> writes:
> Thank you. For some reason I couldn't get it to trip with "ulimit -d
> 51200", but "ulimit -v 1572864" (1.5GiB) got me this in serverlog. I hope
> this is readable, if not it's also here:

Well, here's the problem:

>         ExprContext: 812638208 total in 108 blocks; 183520 free (171
> chunks); 812454688 used

So something involved in expression evaluation is eating memory.
Looking at the query itself, I'd have to bet on this:

>            ARRAY_TO_STRING(ARRAY_AGG(MM.ID::CHARACTER VARYING), ',')

My guess is that this aggregation is being done across a lot more rows
than you were expecting, and the resultant array/string therefore eats
lots of memory.  You might try replacing that with COUNT(*), or even
better SUM(LENGTH(MM.ID::CHARACTER VARYING)), just to get some definitive
evidence about what the query is asking to compute.
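Concretely, the substitution might look like this. Only the aggregate expression comes from the original query; the table alias target, join, and GROUP BY column here are illustrative stand-ins, since the full query isn't quoted in this message:

```sql
-- Suspected memory hog from the original query:
--   ARRAY_TO_STRING(ARRAY_AGG(MM.ID::CHARACTER VARYING), ',')
--
-- Cheap substitutes that report how many rows feed the aggregate and how
-- large the concatenated result would be, without materializing the array:
SELECT COUNT(*) AS rows_aggregated,                          -- row count per group
       SUM(LENGTH(MM.ID::CHARACTER VARYING)) AS total_chars  -- size of would-be string
FROM   some_table MM       -- hypothetical table name
GROUP  BY MM.some_key;     -- hypothetical grouping column
```

If rows_aggregated or total_chars comes back enormous for some group, that confirms the array/string result is what's eating the ExprContext memory.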

Meanwhile, it seems like ulimit -v would provide the safety valve
you asked for originally.  I too am confused about why -d didn't
do it, but as long as you found a variant that works ...
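For the archives, a sketch of what that safety valve looks like in practice. The limit value matches the 1572864 KiB (1.5 GiB) from earlier in the thread; the commented-out pg_ctl line and data directory path are illustrative, since the actual start script isn't shown here:

```shell
# Cap per-process virtual memory before launching postgres, in a subshell so
# the limit doesn't leak into the surrounding shell. ulimit -v takes KiB.
(
  ulimit -v 1572864   # 1.5 GiB address-space cap; a backend that exceeds it
                      # gets allocation failures instead of eating the box
  ulimit -v           # print the limit back to confirm it took effect
  # exec pg_ctl start -D /path/to/data   # illustrative; uncomment in a real script
)
```

The subshell matters: limits set with ulimit are inherited by children (such as the postmaster and its backends) but can't be raised back by an unprivileged process, so you normally set them only in the service start script.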

			regards, tom lane


-- 
Sent via pgsql-general mailing list (pgsql-general@xxxxxxxxxxxxxx)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general
