Hi,

I've got a Postgres database and a batch process that generates metadata to be inserted into one of the tables. Every 15 minutes or so, the batch script recalculates the metadata (about 600,000 rows), dumps it to a file, and then does a TRUNCATE of the table followed by a COPY to import that file back into it.

The problem is that whilst this process is happening, other queries against the table time out. I've tried copying into a temp table first and then doing an "INSERT INTO table (SELECT * FROM temp)", but the second statement still takes a long time and causes a loss of performance.

So, what's the best way to import my metadata without it affecting the performance of other queries against the table?

Thanks,
Andrew
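
P.S. For concreteness, here is roughly what the batch script does at the moment. This is a sketch: the real table name and file path differ, and I've shown the two statements wrapped in a single transaction.

    -- runs every ~15 minutes; "metadata" and the file path are placeholders
    BEGIN;
    TRUNCATE metadata;
    COPY metadata FROM '/path/to/metadata.dump';
    COMMIT;

If the two statements do run in one transaction, TRUNCATE's exclusive lock on the table is held until COMMIT, which I assume is why the readers block and time out.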
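
The temp-table variant I tried looks roughly like this (again a sketch; the staging-table name is made up, and I've assumed the TRUNCATE of the real table still has to happen before the INSERT):

    BEGIN;
    -- stage the dump in a temp table that vanishes at commit
    CREATE TEMP TABLE metadata_staging (LIKE metadata) ON COMMIT DROP;
    COPY metadata_staging FROM '/path/to/metadata.dump';
    -- then replace the contents of the real table
    TRUNCATE metadata;
    INSERT INTO metadata SELECT * FROM metadata_staging;
    COMMIT;

The COPY into the temp table is fast, but the final INSERT ... SELECT of 600,000 rows is still slow, and the real table stays locked for the whole transaction.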