There is a procedure elsewhere that generates a file containing statements like:

  insert into table1 (colname1, colname2, colname3, ...) values (...);
  insert into table1 (colname1, colname2, colname3, ...) values (...);
  insert into table1 (colname1, colname2, colname3, ...) values (...);
  update table1 set colname1=value1, colname2=value2, ... where id=...;
  update table1 set colname1=value1, colname2=value2, ... where id=...;

and I have a target table on pg that shares the same structure but not the column names :(

I *could* change the target column names, but I'd like to avoid it, especially because I'd like to become more independent from the source structure. The information on whether a row was an update rather than an insert is LOST after that file is produced, so the only thing I get is that file.

I see 2 ways to import that data:

- awk/sed, rewriting the statements to use the target column names;
- creating a table with exactly the same structure as the source and then reconstructing whether records have to be inserted or updated in the final target.

A trick could be to add a trigger and 2 additional columns (inserted timestamp, updated timestamp) on the first target, so that I could easily filter which rows were updated and which were inserted (sketched below).

This could be a "general" import problem, but I can't see how to exploit 2 characteristics that may help me speed the thing up:

- I know which statements are inserts and which are updates;
- the structure of the imported data is nearly the same as the target's.

It would be nice if I could cheaply overcome the problem of the different column names, so I could delay the problem until the exported format changes to something better (a view-based sketch is below as well).

Does "update, and if not found insert" incur a major slowdown? (That pattern is sketched at the end of this message.)

--
Ivan Sergio Borgonovo
http://www.webthatworks.it
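
Here is a minimal sketch of that staging trick, assuming hypothetical column types (only id and colname1..colname3 come from the message above); the staging table keeps the source's table and column names so the file loads into it unchanged, and the trigger stamps every row touched by an update:

  create table table1 (
      id          integer primary key,
      colname1    text,
      colname2    text,
      colname3    text,
      inserted_ts timestamp default now(),
      updated_ts  timestamp
  );

  create or replace function table1_stamp() returns trigger as $$
  begin
      new.updated_ts := now();
      return new;
  end;
  $$ language plpgsql;

  create trigger table1_stamp
      before update on table1
      for each row execute procedure table1_stamp();

Since the file's insert statements list their columns explicitly, the two extra columns just pick up their defaults.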
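
One caveat before replaying the file: an update in it may refer to a row that already lives in the final target but was never inserted by the same file, and against an empty staging table that update would silently match nothing. Seeding the staging table from the target first avoids that. A sketch of the whole round trip, where final_target, tid and tcol1..tcol3 are hypothetical names standing in for the real target:

  -- 1. seed staging from the target, translating column names once
  insert into table1 (id, colname1, colname2, colname3)
  select tid, tcol1, tcol2, tcol3 from final_target;

  -- 2. replay the generated file, e.g. from psql:
  --      \i exported_file.sql

  -- 3. rows the target does not know yet were inserts
  insert into final_target (tid, tcol1, tcol2, tcol3)
  select id, colname1, colname2, colname3
  from table1 s
  where not exists (select 1 from final_target f where f.tid = s.id);

  -- 4. rows the trigger stamped were updates (rows both inserted and
  --    updated by the file are harmlessly rewritten with the same values)
  update final_target f
  set tcol1 = s.colname1,
      tcol2 = s.colname2,
      tcol3 = s.colname3
  from table1 s
  where f.tid = s.id
    and s.updated_ts is not null;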
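
As for cheaply papering over the column name differences without touching the target: a view can present the target under the source's names, so the generated file replays against it unchanged and the staging table is not needed at all. This sketch uses the same hypothetical names as above; it assumes a PostgreSQL version where simple views are auto-updatable (9.3 and later), while on older versions you would need rules or an instead-of trigger to route the writes:

  create view table1 as
  select tid   as id,
         tcol1 as colname1,
         tcol2 as colname2,
         tcol3 as colname3
  from final_target;

  -- the file's "insert into table1 (...)" and "update table1 set ..."
  -- statements are now rewritten onto final_target automatically

When the exported format eventually improves, the view gets dropped and nothing else has to change.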
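
On the last question: "update, and if not found insert" costs one extra statement only for the rows that turn out to be missing, which is usually not a major slowdown for a single-session load; its known weakness is a race under concurrent writers (two sessions can both see "not found"), which is why the merge example in the PostgreSQL documentation wraps it in a loop with exception handling. A minimal single-session sketch as a plpgsql function, same hypothetical names:

  create or replace function merge_final_target(
      p_id integer, p_c1 text, p_c2 text, p_c3 text
  ) returns void as $$
  begin
      update final_target
      set tcol1 = p_c1, tcol2 = p_c2, tcol3 = p_c3
      where tid = p_id;
      if not found then
          insert into final_target (tid, tcol1, tcol2, tcol3)
          values (p_id, p_c1, p_c2, p_c3);
      end if;
  end;
  $$ language plpgsql;

  -- per-row usage:
  --   select merge_final_target(42, 'a', 'b', 'c');

On PostgreSQL 9.5 and later a single "insert ... on conflict (tid) do update set ..." does the same job in one statement.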