
Alternative to drop index, load data, recreate index?


When loading very large data exports (> 1 million records) I have found it necessary to use the following sequence to achieve even reasonable import performance:

1. Drop all indices on the recipient table
2. Use "copy recipient_table from '/tmp/input.file';"
3. Recreate all indices on the recipient table
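The sequence above could be scripted roughly as follows. This is a sketch: the index definitions and column names are placeholders, since the real schema isn't shown. Raising `maintenance_work_mem` for the session is a documented way to speed up the CREATE INDEX steps.

```sql
-- Sketch of the drop/load/recreate sequence; index and column
-- names here are hypothetical, not from the original post.
BEGIN;

DROP INDEX recipient_table_event_date_idx;   -- placeholder index name
DROP INDEX recipient_table_user_id_idx;      -- placeholder index name

COPY recipient_table FROM '/tmp/input.file';

-- More memory per index build (session-local setting).
SET maintenance_work_mem = '512MB';

CREATE INDEX recipient_table_event_date_idx
    ON recipient_table (event_date);
CREATE INDEX recipient_table_user_id_idx
    ON recipient_table (user_id);

COMMIT;
```

Running the whole thing in one transaction means a failure partway through rolls back cleanly, leaving the original table and indices intact.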

However, I now have tables so large that even the 'recreate all indices' step is taking too long (15-20 minutes on 8.2.4).

I am considering moving to date-based partitioned tables (each table = one month-year of data, for example). Before I go that far - are there any other tricks I can or should be using to speed up my bulk data loading?
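For reference, the partitioning approach being considered would, on the 8.x series, be built on table inheritance plus CHECK constraints. A minimal sketch, assuming a parent table `recipient_table` with a hypothetical `event_date` column:

```sql
-- One child table per month-year; the CHECK constraint lets the
-- planner skip irrelevant partitions when constraint_exclusion is on.
-- Table and column names are placeholders.
CREATE TABLE recipient_table_2007_06 (
    CHECK (event_date >= DATE '2007-06-01'
       AND event_date <  DATE '2007-07-01')
) INHERITS (recipient_table);

-- Each month's export can then be loaded straight into its own
-- partition and indexed there, so only the new partition's index
-- has to be built:
COPY recipient_table_2007_06 FROM '/tmp/input.file';
CREATE INDEX recipient_table_2007_06_event_date_idx
    ON recipient_table_2007_06 (event_date);

-- Needed so queries against the parent prune partitions:
SET constraint_exclusion = on;
```

The appeal for bulk loading is exactly the point raised above: index rebuilds are confined to the one partition being loaded rather than the whole history.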

Thanks,
jason
