On Wed, 2007-08-22 at 20:41 +0100, Richard Huxton wrote:
> Murali Maddali wrote:
> > This is what I am doing, I am reading the data from SQL Server 2005 and
> > dumping it out to a PostgreSQL 8.2 database.

My 2 cents: I'm doing roughly the same thing, but I'm using Perl and DBI to do it.

> Fastest way to load data into PG is via COPY, don't know if npgsql
> driver supports that. If not, you'd have to go via a text-file.
>
> Load the data into an import table (TEMPORARY table probably) and then
> just use three queries to handle deletion, update and insertion.
> Comparing one row at a time is adding a lot of overhead.

My way of doing it:

1. Pull from SQL Server via DBI into a temporary CSV file.
2. Import into a temp table in PG via \copy.

begin transaction
3. Delete duplicate pkey entries in the actual table.
4. Insert the new entries into the actual table.
5. Truncate the temp table.
6. Update a log file.
end transaction

Works great.

Note on [3]: all incoming rows are treated as new, so instead of doing an update I delete the matching rows first, much like MySQL's mysqlimport --replace. (My choice.)
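
For anyone curious, steps 2 through 5 look roughly like this; table and file names here are made up for illustration, not my real ones:

    -- in psql: load the CSV exported from SQL Server into the staging table
    \copy import_tmp from '/tmp/sqlserver_export.csv' with csv

    begin;
    -- [3] drop rows whose primary keys are about to be re-imported
    delete from target_table
     where id in (select id from import_tmp);
    -- [4] bring in the fresh copies
    insert into target_table
    select * from import_tmp;
    -- [5] empty the staging table for the next run
    truncate import_tmp;
    commit;

Since TRUNCATE can be rolled back in 8.2, a failure anywhere in the block leaves the real table untouched, and other sessions never see it half-updated.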