Re: How to skip duplicate records while copying from CSV to table in Postgresql using "COPY"

On Sun, 2015-05-24 at 16:56 +0630, Arup Rakshit wrote:
> Hi,
> 
> I am copying the data from a CSV file to a table using the "COPY"
> command. But one thing I got stuck on is how to skip duplicate records
> while copying from CSV to tables. From the documentation, it seems
> PostgreSQL doesn't have any built-in tool to handle this with "COPY".
> Searching around, I found the idea below of using a temp table
> (sketched after this message):
> 
> http://stackoverflow.com/questions/13947327/to-ignore-duplicate-keys-during-copy-from-in-postgresql
> 
> I am also considering letting all the records get inserted and then
> deleting the duplicates from the table afterwards, as this post
> suggests:
> http://www.postgresql.org/message-id/37013500.DFF0A64A@xxxxxxxxxxxxxxxxxxxx
> 
> Both of these solutions look like doing double work, and I am not sure
> which is the best option here. Can anybody suggest which approach I
> should adopt? Or if you have any better ideas for this task, please
> share.
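
[A minimal sketch of the staging-table approach from the Stack Overflow
link above, assuming a destination table "target" with a unique key
column "id" -- all names and paths here are placeholders, not from the
original thread:

  -- load everything into a scratch table first
  CREATE TEMP TABLE staging (LIKE target INCLUDING DEFAULTS);

  -- server-side COPY reads files as the postgres user; from psql,
  -- \copy does the same thing client-side without superuser rights
  COPY staging FROM '/path/to/data.csv' WITH (FORMAT csv);

  -- move over one row per key, skipping keys already in target;
  -- without an ORDER BY, which duplicate survives is arbitrary
  INSERT INTO target
  SELECT DISTINCT ON (id) *
  FROM staging s
  WHERE NOT EXISTS (SELECT 1 FROM target t WHERE t.id = s.id);

This pays for the extra pass through the staging table, but it skips
duplicates both within the file and against rows already in target.]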

Assuming you are using Unix, or can install Unix tools, run the input
files through

  sort -u

before passing them to COPY.
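
Note that sort -u removes duplicate whole lines within the file only
(not rows already in the table), and it will also sort a CSV header row
in among the data. A minimal variant that keeps the header in place,
with placeholder file names:

  # keep the header line, dedupe and sort only the data rows
  ( head -n 1 input.csv; tail -n +2 input.csv | sort -u ) > deduped.csv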

Oliver Elphick


