Hello,
well, I'm new to postgres and this is my first post on this list :-)
Anyway, I have to batch-import bulk CSV data into a staging database (as
part of an ETL-like process). The data ought to be read via STDIN;
however, to keep it simple and stupid, saving it to a file and
importing it afterwards is also an option. Sticking my nose into the
docs, I noticed that COPY[1] as well as pg_bulkload[2] are able to do
this.
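For reference, this is roughly the kind of COPY I have in mind (the
table and column names are made up):

    COPY staging (col_a, col_b, col_c)
        FROM STDIN WITH (FORMAT csv, HEADER true);

(or FROM '/path/to/file.csv' for the file-based variant).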
However, there are some additional columns in the staging table (job id,
etc.) that have to be set in order to identify imported rows. These
attributes are not part of the data coming from STDIN (they are
metadata), and I see no way to specify default values for "missing"
CSV columns. (IMHO both COPY and pg_bulkload will fall back to the
table defaults for columns that are missing; am I missing something?
See the sketch below.)
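What I would try, unless I'm overlooking something, is to lean on
exactly those table defaults: set the job id default right before the
load and list only the CSV columns in COPY, so the omitted columns are
filled in from their defaults. A minimal sketch (table, column names,
and the job id value 42 are all made up):

    CREATE TABLE staging (
        job_id integer NOT NULL,
        col_a  text,
        col_b  text
    );

    -- set the metadata for this particular import job ...
    ALTER TABLE staging ALTER COLUMN job_id SET DEFAULT 42;

    -- ... then COPY only the columns present in the CSV;
    -- job_id should be filled in from its column default
    COPY staging (col_a, col_b) FROM STDIN WITH (FORMAT csv);

An alternative I've seen mentioned is to COPY into a plain temp table
first and then INSERT ... SELECT the rows with the job id attached,
but that touches the data twice.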
So, do you have any advice on designing a fast bulk import for staging
data?
Thanks in advance,
Thomas
[1] http://www.postgresql.org/docs/9.0/static/sql-copy.html
[2] http://pgbulkload.projects.postgresql.org/pg_bulkload.html