Hi all,

I am running a data import into a PostgreSQL 8.0.1 database. The target table has ~100 million rows, and the import adds ~40 million more, inserting 10,000 rows per transaction. The table has a few indexes, a few foreign key constraints, and one insert trigger. Normally the trigger inserts a row into another table for each insert on this one, but for the duration of the transfer it was modified not to insert anything for the transferred data; it does, however, perform an extra lookup on the index of a ~10,000-row table to evaluate the exclusion condition.

Given this context, at the start of the transfer I had a rate of ~1.3 million rows per hour, which has dropped progressively to ~300,000 now, after transferring ~24 million rows.

My question is: what could cause this progressive slow-down? I would have expected a more or less constant transfer rate. The table was not empty in the first place, so I can't simply blame the indexes slowing down as they grow -- after a growth of ~20%, inserts are more than 4 times slower. What should I suspect here? I can't just drop the indexes/foreign keys/triggers; the database is in production use.

TIA for any ideas,

Csaba.

---------------------------(end of broadcast)---------------------------
TIP 9: In versions below 8.0, the planner will ignore your desire to
choose an index scan if your joining column's datatypes do not match
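For reference, the modified trigger behaves roughly like the sketch below. All table, column, and function names here are invented for illustration (the real schema is not shown in the post); the point is that every insert still pays for one index probe on the small exclusion table, even when the secondary insert is skipped:

```sql
-- Hypothetical sketch only: names like transfer_exclusion, audit_table,
-- and NEW.key are placeholders, not the actual schema.
CREATE OR REPLACE FUNCTION big_table_insert_trg() RETURNS trigger AS $$
BEGIN
    -- Extra lookup against the index of the ~10,000-row exclusion table
    IF EXISTS (SELECT 1 FROM transfer_exclusion e WHERE e.key = NEW.key) THEN
        -- Row belongs to the transfer: skip the secondary insert
        RETURN NEW;
    END IF;
    -- Normal path: mirror the row into the other table
    INSERT INTO audit_table (ref_id, created_at) VALUES (NEW.id, now());
    RETURN NEW;
END;
$$ LANGUAGE plpgsql;
```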