Richard,

I have added a transaction to my code and it took about two and a half hours
to process around 48,000 records. Again, all of this time is taken by the
Update() method on the adapter. I don't know Perl, so I can't set up the
database link to SQL Server 2005, and I also don't have permission to write
the data out to files. Are there any other options, such as a different
driver or stored procedures? I have to compare each column in each row
before doing the update.

Your suggestions and comments are greatly appreciated.

Thank you,
Murali K. Maddali
256-705-5191
murali.maddali@xxxxxxx

-----Original Message-----
From: Richard Huxton [mailto:dev@xxxxxxxxxxxx]
Sent: Wednesday, August 22, 2007 2:41 PM
To: Murali Maddali
Cc: pgsql-general@xxxxxxxxxxxxxx
Subject: Re: Adapter update.

Murali Maddali wrote:
> This is what I am doing: I am reading the data from SQL Server 2005
> and dumping it out to a PostgreSQL 8.2 database.
>
>     while (r.Read())
>         _save(r, srcTblSchema, destTbl, destConn);
>
>     r.Close();
>
>     // This is where my application goes into la-la land. If I call
>     // this update in my while loop above, it took about two hours
>     // to process the whole thing.
>     adp.Update(destTbl);

That's probably because it was doing each update in its own transaction.
That will require committing each row to disk.

> I have around 60,000 records. I also have a geometry field on my table.
>
> I have a couple of questions.
>
> 1) What do I do to speed up the process? Any database configuration
> changes, connection properties, ...?

Well, if you do it all in a single transaction it should be fairly quick.
You might also find the DBI-Link project useful, if you know any Perl.
That would let you reach out directly from PG to the SQL Server database.
  http://pgfoundry.org/projects/dbi-link/

> 2) When I call adapter.Update(), does NpgsqlDataAdapter check whether
> the column value really changed or not? I believe SqlDataAdapter does
> this validation before it actually writes to the database.
Sorry, don't know - but you have the source, so it should be easy enough to
check. If not, I'm sure the Npgsql people would be happy to get a patch.

> Any suggestions and comments are greatly appreciated. Right now I am
> dead in the water and can't get it to work on large datasets.

The fastest way to load data into PG is via COPY; I don't know whether the
Npgsql driver supports that. If not, you'd have to go via a text file.

Load the data into an import table (a TEMPORARY table, probably) and then
just use three queries to handle deletion, update and insertion. Comparing
one row at a time adds a lot of overhead.

--
  Richard Huxton
  Archonet Ltd
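[Editor's note: the advice above (one transaction, a temporary staging
table, then three set-based queries) can be sketched in C# with Npgsql as
follows. The connection string, table name `parcels`, staging table
`parcels_stage`, and columns `id`, `name`, `geom` are all hypothetical
placeholders for the poster's actual schema; this is a sketch of the
approach, not a drop-in implementation.]

```csharp
using Npgsql;

class StagingLoadSketch
{
    static void Exec(NpgsqlConnection conn, string sql)
    {
        // Small helper: run one SQL statement on the open connection.
        using (NpgsqlCommand cmd = new NpgsqlCommand(sql, conn))
            cmd.ExecuteNonQuery();
    }

    static void Main()
    {
        // Hypothetical connection string.
        using (NpgsqlConnection conn = new NpgsqlConnection(
            "Host=localhost;Database=gis;Username=user;Password=secret"))
        {
            conn.Open();
            using (NpgsqlTransaction tx = conn.BeginTransaction())
            {
                // 1. Staging table with the same shape as the target,
                //    dropped automatically when the transaction commits.
                Exec(conn, "CREATE TEMPORARY TABLE parcels_stage " +
                           "(LIKE parcels) ON COMMIT DROP");

                // 2. Load the SQL Server rows into the staging table here,
                //    e.g. one parameterized INSERT per row from the reader.
                //    Because everything runs inside this one transaction,
                //    there is a single commit at the end instead of one
                //    commit per row.
                // while (r.Read()) { /* INSERT INTO parcels_stage ... */ }

                // 3. Three set-based statements replace the row-at-a-time
                //    adapter updates. The column comparison is pushed into
                //    the UPDATE's WHERE clause, so unchanged rows are
                //    never rewritten.
                Exec(conn, @"DELETE FROM parcels p
                             WHERE NOT EXISTS
                               (SELECT 1 FROM parcels_stage s
                                WHERE s.id = p.id)");
                Exec(conn, @"UPDATE parcels p
                             SET name = s.name, geom = s.geom
                             FROM parcels_stage s
                             WHERE p.id = s.id
                               AND (p.name IS DISTINCT FROM s.name
                                    OR p.geom IS DISTINCT FROM s.geom)");
                Exec(conn, @"INSERT INTO parcels
                             SELECT s.* FROM parcels_stage s
                             WHERE NOT EXISTS
                               (SELECT 1 FROM parcels p
                                WHERE p.id = s.id)");

                tx.Commit();   // one commit for the whole batch
            }
        }
    }
}
```

`IS DISTINCT FROM` is used instead of `<>` so that NULLs compare sanely
(NULL vs. NULL counts as "not changed"), which matters when diffing column
values row against row.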