Re: pg_restore failing with "ERROR: out of memory"

Yes, it kind of needs to be data only, since I am pulling from a slonized database.  My experience has been that if you don’t first load a schema produced by slony1_extract_schema.sh, you end up with all the Slony triggers and crap in the dump.  If there is a better way of doing this, I’m definitely all ears.

Aaron


On 3/19/08 3:06 PM, "Tom Lane" <tgl@xxxxxxxxxxxxx> wrote:

Aaron Brown <abrown@xxxxxxxxxxxx> writes:
> I’m attempting to do something that should be a trivially simple task.  I
> want to do a data only dump from my production data in the public schema and
> restore it on another machine.

Does it really need to be data-only?  A regular schema+data dump usually
restores a lot faster.

Your immediate problem is probably that it's running out of memory for
pending foreign-key triggers.  Even if it didn't run out of memory, the
ensuing one-tuple-at-a-time checks would take forever.  You'd be better
off dropping the FK constraint, loading the data, and re-creating the
constraint.
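
The drop/load/re-create cycle described above might look like the following sketch.  Table, column, and constraint names here are hypothetical; substitute your own, and note that with the constraint dropped the check runs once over the whole table instead of once per inserted row:

```sql
-- Hypothetical schema: "orders" references "customers"
-- via the FK constraint orders_customer_id_fkey.
BEGIN;

-- Drop the FK so the bulk load doesn't queue a per-row trigger check.
ALTER TABLE orders DROP CONSTRAINT orders_customer_id_fkey;

-- Bulk-load the data (COPY is much faster than row-by-row INSERTs).
COPY orders FROM '/tmp/orders.dat';

-- Re-create the constraint; the FK check now runs in a single pass.
ALTER TABLE orders
  ADD CONSTRAINT orders_customer_id_fkey
  FOREIGN KEY (customer_id) REFERENCES customers (id);

COMMIT;
```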

There's further discussion of bulk-loading tricks in the manual:
http://www.postgresql.org/docs/8.2/static/populate.html

                        regards, tom lane




-------------------------------------------------------
Aaron Brown, Systems Engineer
BzzAgent, Inc. | www.bzzagent.com
abrown@xxxxxxxxxxxx | 617.451.2280
-------------------------------------------------------

