Murali Maddali wrote:
> This is what I am doing: I am reading the data from a SQL Server 2005
> database and dumping it out to a PostgreSQL 8.2 database.
>     while (r.Read())
>         _save(r, srcTblSchema, destTbl, destConn);
>     r.Close();
>
>     // This is where my application goes into lala land: if I call
>     // this update in my while loop above, it took about two hours to
>     // process the whole thing.
>     adp.Update(destTbl);

That's probably because it was doing each update in its own transaction.
That'll require committing each row to disk.

> I have around 60,000 records. I also have a geometry field on my table.
>
> I have a couple of questions.
>
> 1) What do I do to speed up the process? Any database configuration
> changes, connection properties, ...?
Well, if you're doing it all in a single transaction it should be
fairly quick.
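
Something like this (a minimal sketch, untested; it assumes destConn is
an open NpgsqlConnection with "using Npgsql;" in scope, and that your
_save helper writes through that same connection):

    // One transaction for the whole load: rows are flushed to disk
    // once, at Commit(), instead of once per row. Depending on the
    // driver version, _save's commands may also need their
    // Transaction property set to tx.
    NpgsqlTransaction tx = destConn.BeginTransaction();
    try
    {
        while (r.Read())
            _save(r, srcTblSchema, destTbl, destConn);
        tx.Commit();
    }
    catch
    {
        tx.Rollback();  // leave the destination untouched on failure
        throw;
    }
    finally
    {
        r.Close();
    }
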
You might also find the DBI-link project useful, if you know any Perl.
That would let you reach out directly from PG to the SQL-Server database.
http://pgfoundry.org/projects/dbi-link/

> 2) When I call adapter.Update(), does NpgsqlDataAdapter check to see
> whether the column value really changed or not? I believe
> SqlDataAdapter does this validation before it actually writes to the
> database.
Sorry, don't know - but you have the source, so it should be easy
enough to check. If not, I'm sure the Npgsql people would be happy to
accept a patch.

> Any suggestions and comments are greatly appreciated. Right now I am
> dead in the water and can't get it to work on large datasets.
The fastest way to load data into PG is via COPY; I don't know whether
the Npgsql driver supports that. If not, you'd have to go via a text
file.
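
For example, with a tab-separated dump (the file path and the
import_parcels table and columns here are only placeholders for
whatever you export from SQL Server):

    -- Server-side bulk load; one statement for all 60,000 rows.
    -- COPY FROM a file reads on the server and needs superuser;
    -- from psql, \copy does the same thing client-side.
    COPY import_parcels (id, name, geom)
    FROM '/tmp/parcels.txt';
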
Load the data into an import table (probably a TEMPORARY table) and
then just use three queries to handle deletion, update and insertion,
as sketched below. Comparing one row at a time adds a lot of overhead.
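
Roughly (untested, and assuming a primary key called id and a
destination table dest_tbl; substitute your real key and column list):

    -- 1. delete destination rows that no longer exist in the source
    DELETE FROM dest_tbl
    WHERE id NOT IN (SELECT id FROM import_parcels);

    -- 2. update rows present in both
    UPDATE dest_tbl
    SET    name = i.name, geom = i.geom
    FROM   import_parcels i
    WHERE  dest_tbl.id = i.id;

    -- 3. insert rows that are new
    INSERT INTO dest_tbl (id, name, geom)
    SELECT i.id, i.name, i.geom
    FROM   import_parcels i
    WHERE  NOT EXISTS
           (SELECT 1 FROM dest_tbl d WHERE d.id = i.id);
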
--
Richard Huxton
Archonet Ltd