On Thu, Jun 28, 2012 at 2:12 PM, Berend Tober <btober@xxxxxxxxxxxxxxx> wrote:
> Robert Buckley wrote:
>>
>> I have to create a script which imports CSV data into PostgreSQL.
>> The CSV data is automatically created from an external database,
>> so I have no influence over which columns etc. are downloaded.
>>
>> How can I best create a table for the import?
>>
>
> This is what I do:
>
> 1) I have a table defined for import which reflects the CSV structure
> of the data to be imported, i.e., since I know what the fields are on
> each line of the CSV, this template table has columns defined to
> accommodate each known field. This table never actually gets data
> written to it.
>
> 2) To import data, my script creates a temporary import table LIKE
> the import template table.
>
> 3) The script then transfers and transforms the data from the
> temporary import table to another permanent table that has the
> structure, including a primary key, that is more useful for my
> purposes. (It omits some of the columns which I do not really need
> from the CSV, uses a different name for one column, and adds some
> reference information. You could do calculations here as well.)
>
> 4) The temporary import table is deleted at the end of the import
> session.

With file_fdw you can skip some steps:

http://www.postgresql.org/docs/9.1/static/file-fdw.html

You can SELECT from the CSV file.
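
A rough, untested sketch of the file_fdw route (9.1+). The foreign
table name, column names, and file path are placeholders, not from the
original posts; adjust them to match the exported CSV:

  CREATE EXTENSION file_fdw;

  CREATE SERVER csv_files FOREIGN DATA WRAPPER file_fdw;

  -- One column per field in the CSV; text is a safe starting type.
  CREATE FOREIGN TABLE import_csv (
      name      text,
      value     text,
      recorded  text
  ) SERVER csv_files
    OPTIONS (filename '/path/to/export.csv',
             format 'csv', header 'true');

  -- Query (or INSERT ... SELECT) straight from the file.
  SELECT * FROM import_csv;

Note the file has to be readable by the server process, since file_fdw
reads it on the server side.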
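
For comparison, the template-table workflow Berend describes might
look roughly like this in psql. The table and column names
(import_template, import_work, measurements) and the cast are made-up
examples, not his actual schema:

  -- 1) Template table matching the CSV layout; never holds data.
  CREATE TABLE import_template (
      name      text,
      value     text,
      recorded  text
  );

  -- 2) Per-session temporary table cloned from the template.
  CREATE TEMP TABLE import_work (LIKE import_template);

  -- Load the CSV into the temporary table (client-side \copy avoids
  -- needing server file access).
  \copy import_work FROM '/path/to/export.csv' WITH CSV HEADER

  -- 3) Transfer and transform into the permanent table: rename a
  --    column, drop unneeded ones, add reference info, cast types.
  INSERT INTO measurements (label, reading, loaded_at)
  SELECT name, value::numeric, now()
  FROM import_work;

  -- 4) The temporary table disappears when the session ends,
  --    or drop it explicitly.
  DROP TABLE import_work;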