
Re: copy a large table raises out of memory exception

On Mon, 10 Dec 2007, A. Ozen Akyurek wrote:

> We have a large table (about 9,000,000 rows and total size is about 2.8 GB)
> which is exported to a binary file.

How was it exported? With "COPY tablename TO 'filename' WITH BINARY"?

"The BINARY key word causes all data to be stored/read as binary
format rather than as text. It is somewhat faster than the normal
text mode, but a binary-format file is less portable across machine
architectures and PostgreSQL versions."
	http://www.postgresql.org/docs/8.2/static/sql-copy.html
Maybe you are being bitten by this "less portable" caveat.
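
If the file really was produced by COPY ... TO WITH BINARY on the
same architecture and server version, the matching reload would look
something like this (table, database, and file names below are
placeholders, not taken from your mail):

```shell
# Binary COPY round-trip -- export and import must run against the
# same PostgreSQL major version and machine architecture.
psql -d olddatabasename \
     -c "COPY tablename TO '/tmp/tablename.bin' WITH BINARY"
psql -d newdatabasename \
     -c "COPY tablename FROM '/tmp/tablename.bin' WITH BINARY"
```

If the versions or architectures differ, the plain text format
(COPY without BINARY) is the safer choice.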

> When we run "copy tablename from filepath" command, (...) and
> PostgreSQL raises exception "out of memory".

I'd try to use pg_dump/pg_restore in custom format, like this:
	pg_dump -a -Fc -Z1 -f [filename] -t [tablename] [olddatabasename]
	pg_restore -1 -a -d [newdatabasename] [filename]
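
For example, with hypothetical names filled in (flags as above:
-a data only, -Fc custom format, -Z1 light compression, -1 restore
in a single transaction):

```shell
# Dump only the data of one table in custom format:
pg_dump -a -Fc -Z1 -f bigtable.dump -t tablename olddatabasename
# Restore it into the new database as a single transaction:
pg_restore -1 -a -d newdatabasename bigtable.dump
```

The custom format sidesteps the binary COPY portability issue, since
pg_restore regenerates the data stream for the target server.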

Regards
Tometzky
-- 
...although Eating Honey was a very good thing to do, there was a
moment just before you began to eat it which was better than when you
were...
                                                      Winnie the Pooh

