> I think I need to ask in a more specific way. I have a table, say `table1`, into which I feed data from different CSV files. Suppose I have inserted N records into `table1` from CSV file `c1`. That is fine; but the next time, when I import from a different CSV file, say `c2`, into `table1`, I don't want to re-insert any record from the new CSV file that the table already contains.
>
> How do I do this?
>
> My SO link is not a solution to my problem, I see now.
>
> --
> ================
> Regards,
> Arup Rakshit
> ================
> Debugging is twice as hard as writing the code in the first place. Therefore, if you write the code as cleverly as possible, you are, by definition, not smart enough to debug it.
>
> --Brian Kernighan

Assuming these CSV files are coming from an external source (e.g. bank statement transactions feeding into a bank reconciliation module), you need a program to read the file and handle it accordingly. If end users are running this, they would probably appreciate a short report of what was loaded and what was discarded.

On the other hand, if DBAs are running this, you could use ExecuteQuery (written in Java), which has a facility to load CSV files and will report the duplicates. You can ignore the duplicates and still commit the non-duplicated rows to the table, if you so desire. Note that the default for EQ is NOT to run in auto-commit mode, so you have to actually issue a "commit" to save your work; this option can be changed in your preferences.

HTH,
Robert

--
Sent via pgsql-general mailing list (pgsql-general@xxxxxxxxxxxxxx)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general
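The thread never shows the actual SQL, so here is one common pattern (not from the thread itself): load the new CSV into a staging table, then copy across only the rows that `table1` does not already have. This is a minimal sketch; the schema (`id`, `amount` columns, `id` as the record key) and the `staging` table name are hypothetical, and it uses SQLite via Python's standard library so it runs self-contained. Against PostgreSQL you would typically `COPY` the CSV into the staging table and, given a unique constraint on the key, could use `INSERT ... ON CONFLICT DO NOTHING` instead of the `NOT EXISTS` anti-join.

```python
import csv
import io
import sqlite3

# Hypothetical schema: each record is identified by its id column.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE table1 (id INTEGER PRIMARY KEY, amount REAL)")
conn.execute("CREATE TABLE staging (id INTEGER, amount REAL)")

def load_csv(conn, csv_text):
    """Load a CSV into the staging table, then insert into table1
    only those rows whose id is not already present. Returns the
    number of genuinely new rows inserted."""
    conn.execute("DELETE FROM staging")
    rows = [(int(r["id"]), float(r["amount"]))
            for r in csv.DictReader(io.StringIO(csv_text))]
    conn.executemany("INSERT INTO staging VALUES (?, ?)", rows)
    cur = conn.execute("""
        INSERT INTO table1 (id, amount)
        SELECT s.id, s.amount
        FROM staging s
        WHERE NOT EXISTS (SELECT 1 FROM table1 t WHERE t.id = s.id)
    """)
    conn.commit()
    return cur.rowcount

c1 = "id,amount\n1,10.0\n2,20.0\n"
c2 = "id,amount\n2,99.0\n3,30.0\n"  # id 2 duplicates a row from c1

print(load_csv(conn, c1))  # 2 new rows from c1
print(load_csv(conn, c2))  # only id 3 is new; the duplicate id 2 is skipped
```

This also gives end users the little report Robert mentions: the insert's row count tells you how many rows were kept, and `len(rows) - inserted` how many were discarded as duplicates.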