Does anyone have experience or advice on how to efficiently issue a large number of COPY statements? The data (US Census) comes in > 100 "segments" (each of which will be copied to its own database table) for each of the 51 states, for a total of > 5000 text files. I can generate the COPY statements with a script.

The two specific questions I can think of (though I'm sure there are more I'm not thinking of) are:

1) The documentation says: "COPY is fastest when used within the same transaction as an earlier CREATE TABLE or TRUNCATE command. In such cases no WAL needs to be written, because in case of an error, the files containing the newly loaded data will be removed anyway." Would I be able to take advantage of this if I do the following?

    BEGIN;
    TRUNCATE import_table;
    COPY import_table FROM 'file1';
    COPY import_table FROM 'file2';
    ...
    COPY import_table FROM 'file51';
    END;

2) Is there a performance hit to doing a COPY to more than one table in the same transaction?

Any other advice will be appreciated.

Regards,
--Lee

--
Lee Hachadoorian
PhD, Earth & Environmental Sciences (Geography)
Research Associate, CUNY Center for Urban Research
http://freecity.commons.gc.cuny.edu/

--
Sent via pgsql-general mailing list (pgsql-general@xxxxxxxxxxxxxx)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general
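As a sketch of the script-generation step: the snippet below builds one transaction per import table, bundling the TRUNCATE with all of that table's per-state COPY statements, which is the pattern the quoted documentation describes for skipping WAL. The table name, file paths, and state list here are hypothetical placeholders, not the actual Census layout; adapt them to the real segment/state naming.

```python
def build_load_script(table, files):
    """Emit one transaction: TRUNCATE followed by a COPY per input file.

    Because the TRUNCATE and the COPYs share a transaction, PostgreSQL
    can avoid writing WAL for the loaded rows (given appropriate server
    settings, e.g. wal_level = minimal and no archiving).
    """
    lines = ["BEGIN;", "TRUNCATE %s;" % table]
    lines += ["COPY %s FROM '%s';" % (table, path) for path in files]
    lines.append("COMMIT;")
    return "\n".join(lines)

# Hypothetical example: one import table fed by one file per state.
states = ["al", "ak", "az"]  # ...all 51 in practice
script = build_load_script(
    "import_seg01",
    ["/data/census/seg01_%s.txt" % st for st in states],
)
print(script)
```

The resulting script can be written to a file and run with psql -f; since each table's statements are already wrapped in BEGIN/COMMIT, a failure in one segment's load rolls back only that table's transaction.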