On Sun, 2015-05-24 at 18:25 +0630, Arup Rakshit wrote:
> > Assuming you are using Unix, or can install Unix tools, run the input
> > files through
> >
> >     sort -u
> >
> > before passing them to COPY.
> >
> > Oliver Elphick
>
> I think I need to ask in a more specific way. I have a table, say
> `table1`, which I populate with data from different CSV files. Suppose
> I have inserted N records into `table1` from CSV file `c1`. That is
> fine; but the next time I import from a different CSV file, say `c2`,
> into `table1`, I don't want to reinsert any record that the table
> already contains.
>
> How can I do this?

Unix tools are still the easiest way to deal with it, I think.

Ensure the total input is unique as above and store it in file1. Use COPY
to output the existing table to another text file (file2) in the same
format as file1. Then

    cat file1 file2 | sort | uniq -d >file3

This outputs only the lines that exist in both file1 and file2. Then

    cat file1 file3 | sort | uniq -u >newinputfile

This eliminates from file1 the lines that are already in file2. Note that
it only eliminates lines that are entirely identical; it won't stop
duplicate primary keys.

Oliver Elphick

--
Sent via pgsql-general mailing list (pgsql-general@xxxxxxxxxxxxxx)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general
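As a sketch, the whole workflow can be exercised with sample data standing
in for a real table export. The file names follow the post; the CSV rows
and the `psql \copy` command in the comments are illustrative assumptions,
not part of the original thread:

```shell
#!/bin/sh
# Sketch of the dedup workflow described above, using sample data.

# file1: the new CSV input, made unique with `sort -u` as suggested.
printf '1,alice\n2,bob\n3,carol\n' | sort -u > file1

# file2: what table1 already contains. Against a real database this
# could be produced with something like:
#   psql -c "\copy table1 TO 'file2' WITH (FORMAT csv)"
printf '2,bob\n4,dave\n' > file2

# file3: lines present in BOTH file1 and file2 (uniq -d keeps only
# lines that occur more than once in the combined, sorted stream).
cat file1 file2 | sort | uniq -d > file3

# newinputfile: file1 minus anything already in file2 (each duplicate
# now appears twice, so uniq -u drops it).
cat file1 file3 | sort | uniq -u > newinputfile

cat newinputfile   # -> 1,alice and 3,carol; 2,bob was filtered out
```

Only `newinputfile` is then fed to COPY. As noted, this compares whole
lines, so rows that differ in any column but share a primary key will
still collide on insert.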