On Thu, Mar 3, 2011 at 9:13 AM, Matt Burke <mattblists@xxxxxxxxxxxxx> wrote:
> Hi. I've only been using PostgreSQL properly for a week or so, so I
> apologise if this has been covered numerous times, however Google is
> producing nothing of use.
>
> I'm trying to import a large amount of legacy data (billions of
> denormalised rows) into a pg database with a completely different schema,
> de-duplicating bits of it on-the-fly while maintaining a reference count.
> The procedures to do this have proven easy to write, however the speed is
> not pretty. I've spent some time breaking down the cause and it's come
> down to a simple UPDATE as evidenced below:

PostgreSQL uses MVCC, which means that transactions see a snapshot of the
database as it existed at a certain point in time, usually the beginning of
the current query. Old row versions have to be kept around until they're no
longer of interest to any still-running transaction. Sadly, our ability to
detect which row versions are still of interest is imperfect, so we
sometimes keep row versions that are technically not required.

Unfortunately, repeated updates by the same transaction to the same
database row are one of the cases that we don't handle very well - all the
old row versions will be kept until the transaction commits. I suspect if
you look at the problem case you'll find that the table and index are
getting bigger with every set of updates, whereas when you do the updates
in separate transactions the size grows for a while and then levels off.

--
Robert Haas
EnterpriseDB: http://www.enterprisedb.com
The Enterprise PostgreSQL Company

--
Sent via pgsql-performance mailing list (pgsql-performance@xxxxxxxxxxxxxx)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-performance
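A rough sketch of the behaviour described above, assuming a throwaway
one-row table "t" (the table name, column names, and row counts are
illustrative only, not taken from the original report): when all the
updates happen inside a single transaction, every UPDATE leaves behind a
dead row version that cannot be reclaimed until COMMIT, so the relation
keeps growing with each iteration.

    -- hypothetical one-row table used only to demonstrate the bloat
    CREATE TABLE t (id integer PRIMARY KEY, counter integer NOT NULL);
    INSERT INTO t VALUES (1, 0);

    SELECT pg_relation_size('t');   -- baseline on-disk size

    -- a DO block runs as a single transaction, so none of these
    -- 100000 old row versions can be pruned until it commits
    DO $$
    BEGIN
       FOR i IN 1..100000 LOOP
          UPDATE t SET counter = counter + 1 WHERE id = 1;
       END LOOP;
    END $$;

    SELECT pg_relation_size('t');   -- noticeably larger afterwards

If the same updates are instead issued in separate transactions (for
example, committing once per batch from the client), HOT pruning and
VACUUM can reclaim the dead row versions between batches, which matches
the "grows for a while and then levels off" pattern mentioned above.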