Re: Postgres bulkload without transaction logs


 



On 07/05/2011 10:42 AM, shouvik basu wrote:

I am trying to bulk-load 1 crore (10 million) rows using COPY, with a row width of 1000, but the transaction logs keep taking up a huge amount of space. Is there an easy way to do this and avoid the transaction logs?


If you're using the built-in streaming replication or an archive_command to replicate this data, you can't avoid the transaction logs; those are how the data gets shipped to the other server.

If you CREATE or TRUNCATE the table you're loading into within the same transaction that does the load, a single-server setup will avoid generating most of that transaction log data. For example:

BEGIN;
TRUNCATE TABLE t;
COPY t FROM ...
COMMIT;

That COPY will execute with minimal pg_xlog activity, because if the server crashes in the middle, recovery simply rolls back the truncation; none of the newly loaded rows need to be replayed from WAL.
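To make that concrete, here's a minimal sketch of the same pattern using CREATE TABLE instead of TRUNCATE. The table name and file path are just placeholders, and it assumes wal_level = minimal with no WAL archiving or streaming replication; with any of those enabled, the full WAL is written regardless:

-- Hypothetical staging table and file path, shown for illustration only.
-- Assumes wal_level = minimal and no archive_command / streaming replication.
BEGIN;
CREATE TABLE staging_load (
    id      bigint,
    payload text
);
-- Because staging_load was created in this same transaction,
-- the COPY can skip most WAL writes on a single-server setup.
COPY staging_load FROM '/tmp/load_data.csv' WITH CSV;
COMMIT;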

Another piece of software you may find useful for this case is http://pgbulkload.projects.postgresql.org/

--
Greg Smith   2ndQuadrant US    greg@xxxxxxxxxxxxxxx   Baltimore, MD
Comprehensive and Customized PostgreSQL Training Classes:
http://www.2ndquadrant.us/postgresql-training/



