Re: ERROR: out of memory DETAIL: Failed on request of size ???

On 27 November 2013, 22:39, Brian Wong wrote:
>> Date: Fri, 22 Nov 2013 20:11:47 +0100
>> Subject: Re:  ERROR: out of memory DETAIL: Failed on request of
>> size ???
>> From: tv@xxxxxxxx
>> To: bwong64@xxxxxxxxxxx
>> CC: bricklen@xxxxxxxxx; pgsql-general@xxxxxxxxxxxxxx
>>
>> On 19 November 2013, 5:30, Brian Wong wrote:
>> > I've tried every work_mem value from 1GB all the way up to 40GB, with
>> > no effect on the error.  I'd like to think of this as a server process
>> > memory issue (not the server's buffers) or a client process memory
>> > issue, primarily because when we tested the error there was no other
>> > load whatsoever.  Unfortunately, the error doesn't say what kind of
>> > memory ran out.
>>
>> Hi Brian,
>>
>> first of all, please don't top-post, especially if the previous response
>> used bottom-post. Thank you ;-)
>>
>> Regarding the issue you're seeing:
>>
>> * Increasing work_mem in the hope that it will make the issue go away is
>> pointless. If work_mem is too low, PostgreSQL will automatically spill
>> the data to disk (e.g. it won't do a sort in memory, but an on-disk
>> merge sort). It will never fail for that reason, and the "failed on
>> request of size" message actually comes from malloc, when requesting
>> another chunk of memory from the OS. So you're hitting an OS-level
>> memory limit.
>
> After changing the shared_buffers setting to 200MB, the developer has
> confirmed that the out-of-memory error no longer happens.  So thanks,
> folks.
>
> Playing with work_mem was out of desperation.  PostgreSQL simply giving
> the "Out of memory" error wasn't informative enough about the problem.
> For example, is it the server buffers, the server process, or the client
> process that's having the problem?

PostgreSQL is unable to give you more detailed information about the
cause because it simply does not have it. You're hitting some limit set at
the kernel level: PostgreSQL calls malloc() and the kernel responds with
NULL. What details would you expect from PostgreSQL in that case, other
than the simple statement 'we tried to allocate X bytes and it failed'?
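
To illustrate, here is a minimal sketch (Python, Linux-only, purely an
illustration and not PostgreSQL internals; the limit and request sizes are
arbitrary). Once the kernel refuses an allocation, the requested size is
essentially the only detail the failing process has left to report:

    import resource

    # Cap this process's address space at 1 GB (arbitrary value for the demo).
    limit = 1 * 1024**3
    resource.setrlimit(resource.RLIMIT_AS, (limit, limit))

    request = 4 * 1024**3  # ask for more than the limit allows
    try:
        buf = bytearray(request)  # the underlying allocation fails -> MemoryError
    except MemoryError:
        # The only hard fact available here is the size we just asked for,
        # which is exactly what "Failed on request of size ..." reports.
        print("out of memory: failed on request of size %d" % request)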

Also, this kind of error tends to be 'random', i.e. the allocation error
may happen at a different place every time you run the query, depending
on how many backends are running, etc.

The fact that decreasing shared_buffers to 200 MB made the problem go away
for now only suggests this is a problem with some kernel limit (ulimit,
overcommit, ...). It also means you haven't solved it, and it will likely
happen again in the future, e.g. once the amount of data grows and you
hit the limit again. Find and fix the actual issue.

Have you checked the ulimit / overcommit as I asked you to?
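
For reference, here is one way to inspect both from a script (a Python
sketch, assuming a Linux system). Note that ulimits are per-process and
depend on how the postgres server is started (login shell vs. init
script), so check the server's actual environment as well:

    import resource

    def show_rlimit(label, which):
        soft, hard = resource.getrlimit(which)
        fmt = lambda v: "unlimited" if v == resource.RLIM_INFINITY else str(v)
        print("%s: soft=%s hard=%s" % (label, fmt(soft), fmt(hard)))

    show_rlimit("RLIMIT_AS (virtual memory)", resource.RLIMIT_AS)
    show_rlimit("RLIMIT_DATA (data segment)", resource.RLIMIT_DATA)

    # Kernel overcommit policy: 0 = heuristic, 1 = always allow, 2 = strict
    for knob in ("overcommit_memory", "overcommit_ratio"):
        with open("/proc/sys/vm/" + knob) as f:
            print("vm.%s = %s" % (knob, f.read().strip()))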

>> Note: AFAIK the only operation that does not spill to disk, and may fail
>> with OOM-like errors, is hash aggregate. But by increasing work_mem
>> you're actually encouraging PostgreSQL to make this planning error.
>>
>> I see the query you're running is doing MAX(), so it might be hitting
>> this issue. How much data are you dealing with? How many groups are in
>> the result?
>>
>> * Setting shared_buffers to 18GB is almost certainly a bad choice. I
>> have yet to see a real-world database where shared_buffers over 8GB
>> actually makes a measurable difference. More is not always better, and
>> you're actually reserving memory that can't be used for work_mem (so
>> you're making the issue worse). Please decrease shared_buffers to e.g.
>> 4GB, then try to increase it and measure the performance difference.
>>
> Looking at the documentation on how to configure shared_buffers, it is
> very unclear to me how to maximize performance by allocating as much
> memory to the buffers as possible.  On one hand, the documentation says I
> shouldn't go high on the shared_buffers setting.  On the other hand,
> supposedly the more memory you allocate to the buffers, the better the
> performance.  So at least as of 9.1, this is annoying.  I heard that
> starting from 9.2, this behavior changed dramatically?

I'm not sure I understand what you're trying to say. It's true the
documentation does not give perfectly clear instructions on how to set
an optimal shared_buffers. The reason is very simple - there's no single
value that is optimal for all workloads, operating systems and PostgreSQL
versions. The assumption that more is always better is incorrect for
several reasons.

There's a more detailed wiki page about this:

    https://wiki.postgresql.org/wiki/Tuning_Your_PostgreSQL_Server

The 200MB you've set now seems way too low in your case; aside from making
things slower than necessary, you most probably haven't fixed the actual
issue.
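
If you do experiment with different shared_buffers values as suggested
above, one rough heuristic, besides timing the actual queries, is the
buffer cache hit ratio from pg_stat_database. A minimal Python sketch
(the connection string is a placeholder, adjust for your setup):

    import psycopg2  # assumes the psycopg2 driver is available

    conn = psycopg2.connect("dbname=mydb")  # placeholder DSN
    with conn, conn.cursor() as cur:
        cur.execute("""
            SELECT blks_hit, blks_read
            FROM pg_stat_database
            WHERE datname = current_database()
        """)
        blks_hit, blks_read = cur.fetchone()
        total = blks_hit + blks_read
        ratio = float(blks_hit) / total if total else 0.0
        print("buffer cache hit ratio: %.1f%% (hit=%d, read=%d)"
              % (100 * ratio, blks_hit, blks_read))
    conn.close()

A consistently low hit ratio under the real workload hints that the
working set does not fit in shared_buffers (or the OS cache); it's only a
rough indicator, though, so read it together with actual query timings.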

regards
Tomas






