Fwd: Out of Memory Error on Insert

> Thanks, Craig.
> 
> There are no triggers on the tables and the only constraints are the
> primary keys.
> 
> I am thinking that the problem may be that I have too many full
> self-joins on the simple_group table. I am probably getting a
> combinatorial explosion when postgres does cross joins on all the
> derived tables. I think I need to redesign the processing so that I
> don't need to do so many joins.
> 
> However, I am still curious as to why I am getting an out-of-memory
> error. I can see how the performance might be terrible on such a
> query, but I am surprised that postgres doesn't start using the disk
> at some point to reduce memory usage. Could it be that postgres tries
> to keep temp tables in memory?
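> 
> For illustration (the id and group_key columns here are stand-ins for
> the real ones), the kind of query I mean looks roughly like this:
> 
>   -- With k matching rows per key, joining n copies of the table
>   -- yields ~k^n rows per key value.
>   SELECT g1.id, g2.id, g3.id
>   FROM simple_group g1
>   JOIN simple_group g2 ON g1.group_key = g2.group_key
>   JOIN simple_group g3 ON g2.group_key = g3.group_key;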

> On Tue, Oct 18, 2011 at 3:56 AM, Craig Ringer <ringerc@xxxxxxxxxxxxx> wrote:
>> On 10/18/2011 02:52 PM, Mark Priest wrote:
>>> 
>>> I am getting an Out of Memory error in my server connection process
>>> while running a large insert query.
>>> 
>>> Postgres version: "PostgreSQL 8.2.16 on i686-pc-mingw32, compiled by
>>> GCC gcc.exe (GCC) 3.4.2 (mingw-special)"
>>> OS: Windows 7 Professional (v.6.1, build 7601 service pack 1)
>>> The OS is 64-bit but the postgres app is a 32-bit app; I run out of
>>> memory and the server process crashes as soon as it hits 2 GB of
>>> memory. I assume that is because 2 GB is the limit for 32-bit apps.
>>> My client connection is via JDBC in case that is important.
>> 
>> 
>> You're on an old 8.2 point release, 8.2.16 (2010-03-15), which lacks
>> important bug fixes; the latest 8.2 release is 8.2.22 (2011-09-26). See:
>> 
>>  http://www.postgresql.org/docs/8.2/static/release.html
>> 
>> More to the point, you're on 8.2 on Windows! I strongly recommend moving to
>> a newer release if you can, as the newer releases are significantly improved
>> in performance and reliability on Windows.
>> 
>> 
>> For this specific issue, the only thing that comes to mind is whether you
>> have any AFTER INSERT triggers on this table, or any DEFERRABLE
>> constraints (irrespective of whether they're INITIALLY DEFERRED).
>> PostgreSQL must keep track of these so it can execute them at the end of
>> the transaction, and it currently doesn't support spilling that list to
>> disk when it gets too big, so on huge inserts it can eventually fill the
>> backend's available RAM.
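>> 
>> For example, a schema along these lines (the table and column names are
>> invented here purely for illustration) queues one check event per
>> inserted row:
>> 
>>   CREATE TABLE parent (id integer PRIMARY KEY);
>>   CREATE TABLE child (
>>       id        integer PRIMARY KEY,
>>       parent_id integer REFERENCES parent(id)
>>                 DEFERRABLE INITIALLY DEFERRED
>>   );
>>   -- Each row inserted into child adds a deferred foreign-key check
>>   -- event that is held in backend memory until the transaction ends.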
>> 
>> If your issue is with a constraint, a workaround is to drop the constraint,
>> do the insert, then re-establish the constraint and commit the transaction.
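>> 
>> Roughly, reusing the placeholder names from the sketch above:
>> 
>>   BEGIN;
>>   ALTER TABLE child DROP CONSTRAINT child_parent_id_fkey;
>>   -- ... run the huge INSERT here ...
>>   ALTER TABLE child ADD CONSTRAINT child_parent_id_fkey
>>       FOREIGN KEY (parent_id) REFERENCES parent(id)
>>       DEFERRABLE INITIALLY DEFERRED;
>>   COMMIT;
>> 
>> Re-adding the constraint validates the whole table in a single pass
>> instead of queuing a separate check event for every inserted row.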
>> 
>> If it's a trigger, that's trickier. Do the insert in smaller batches if you
>> can, or see if you can disable the trigger, do the inserts, then do all its
>> work in one go at the end.
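>> 
>> In SQL terms, something like this (the trigger name is a placeholder):
>> 
>>   ALTER TABLE child DISABLE TRIGGER my_after_insert_trigger;
>>   -- ... do the inserts, in smaller batches if need be ...
>>   ALTER TABLE child ENABLE TRIGGER my_after_insert_trigger;
>>   -- then run the equivalent of the trigger's work as one statement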
>> 
>> --
>> Craig Ringer
>> 
