On Mon, 2007-09-10 at 17:06 -0700, Jason L. Buberel wrote:
> When loading very large data exports (> 1 million records), I have
> found it necessary to use the following sequence to achieve even
> reasonable import performance:
>
> 1. Drop all indices on the recipient table.
> 2. Use "copy recipient_table from '/tmp/input.file';"
> 3. Recreate all indices on the recipient table.
>
> However, I now have tables so large that even the 'recreate all
> indices' step is taking too long (15-20 minutes on 8.2.4).
>
> I am considering moving to date-based partitioned tables (each table =
> one month-year of data, for example). Before I go that far: are there
> any other tricks I can or should be using to speed up my bulk data
> loading?

If you create the indexes with CONCURRENTLY, then you can write to the
tables while the indexes are being created. That might help reduce your
downtime window.

Regards,
        Jeff Davis

---------------------------(end of broadcast)---------------------------
TIP 6: explain analyze is your friend
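For reference, the suggested variation on the load sequence might look roughly like the following sketch. Table, index, and file names are placeholders taken from the original post; the key change is CREATE INDEX CONCURRENTLY, which builds the index without taking a lock that blocks writes (note it cannot run inside a transaction block, and a plain CREATE INDEX is still faster if you can afford the exclusive lock):

```sql
-- Drop the old indexes before the bulk load (names are hypothetical).
DROP INDEX IF EXISTS recipient_table_created_idx;

-- Bulk-load the data; COPY is much faster than row-by-row INSERTs.
COPY recipient_table FROM '/tmp/input.file';

-- Rebuild the index without blocking concurrent writes to the table.
-- This is slower overall than a plain CREATE INDEX, but the table
-- stays available for INSERT/UPDATE/DELETE while it runs.
CREATE INDEX CONCURRENTLY recipient_table_created_idx
    ON recipient_table (created_at);
```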