Re: Performance Problem

On Friday 13 October 2006 01:22, Martijn van Oosterhout wrote:
> >   1) I have a performance problem as I am trying to insert around 60
> >   million rows into a partitioned table. First I loaded the .csv file
> >   containing the data into a temp table with the COPY command, which
> >   was quick: it took only 15 to 20 minutes. Now I am inserting the data
> >   from the temp table into the original table using insert into
> >   org_table (select * from temp_table); which has been running for more
> >   than an hour and is still going. Is there an easier way to do this?
>
> Does the table you're inserting into have indexes or foreign keys?
> Either of those slows down loading considerably. One common workaround
> is to drop the indexes and constraints, load the data, and re-add them.
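
Concretely, a minimal sketch of that drop-and-reload workaround (the index
and constraint names here are hypothetical; adjust them to your schema):

    -- drop the hypothetical index and foreign key before the bulk load
    DROP INDEX org_table_idx;
    ALTER TABLE org_table DROP CONSTRAINT org_table_fk;

    -- bulk load from the temp table
    INSERT INTO org_table SELECT * FROM temp_table;

    -- re-create them once the data is in place
    CREATE INDEX org_table_idx ON org_table (some_column);
    ALTER TABLE org_table ADD CONSTRAINT org_table_fk
        FOREIGN KEY (some_column) REFERENCES parent_table (id);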

Why do you COPY the data into a temporary table just to run "insert into 
org_table (select * from temp_table);"? Since you're copying ALL records 
anyway, why not COPY the data into "org_table" directly?
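
Something along these lines (the file path is hypothetical; COPY reads the
file on the server, and 8.0 and later understand CSV):

    -- load the .csv straight into the target table, skipping the temp table
    COPY org_table FROM '/path/to/data.csv' WITH CSV;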

Also check the "autocommit" setting. If autocommit is on, every insert is a 
transaction of its own, which leads to a lot of overhead. Turning autocommit 
off and running the inserts in batches of, say, 1000 inserts per transaction 
will increase speed considerably.
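
For example, instead of one transaction per row, wrap each batch in an
explicit BEGIN/COMMIT (the rows shown are of course made up):

    -- one transaction per batch of ~1000 inserts, not one per insert
    BEGIN;
    INSERT INTO org_table VALUES (1, 'first row');
    INSERT INTO org_table VALUES (2, 'second row');
    -- ... up to ~1000 inserts ...
    COMMIT;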


	UC

--
Open Source Solutions 4U, LLC	1618 Kelly St
Phone:  +1 707 568 3056		Santa Rosa, CA 95401
Cell:   +1 650 302 2405		United States
Fax:    +1 707 568 6416

