Ludwik Dylag <ldylag@xxxxxxxxx> wrote:
> I would suggest:
> 1. turn off autovacuum
> 1a. possibly tune the db for better performance for this kind of
>     operation (can't help here)
> 2. restart database
> 3. drop all indexes
> 4. update
> 5. vacuum full table
> 6. create indexes
> 7. turn on autovacuum

I've only ever attempted something like that with a few tens of
millions of rows. I gave up on waiting for the VACUUM FULL step
after a few days.

If some scheduled down time is acceptable (with "some" being hard to
estimate accurately), the best bet would be to add the column with a
USING clause to fill in the value. (I think that would cause a table
rewrite; if not, then add something to the ALTER TABLE which would.)
My impression is that the OP would rather stretch out the
implementation than suffer down time, which can certainly be a valid
call. If that is the goal, then the real question is whether there's
a way to tune the incremental updates to speed up that phase.

Carlo, what version of PostgreSQL is this? Can you show us the
results of an EXPLAIN ANALYZE for one iteration of the UPDATE?
Information on the OS, hardware, PostgreSQL build configuration, and
the contents of postgresql.conf (excluding all comments) could help
us spot possible techniques to speed this up.

-Kevin

--
Sent via pgsql-performance mailing list (pgsql-performance@xxxxxxxxxxxxxx)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-performance
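For the down-time path, a minimal sketch of what the "single rewrite" approach could look like. The table name (big_table), column names (new_col, old_col), and fill expression are all illustrative, not from the thread; note that ADD COLUMN itself does not accept USING, but an ALTER COLUMN ... TYPE with a USING clause forces a full-table rewrite that fills the value in one pass:

```sql
-- Add the column first; with no default this is a cheap catalog change.
ALTER TABLE big_table
    ADD COLUMN new_col integer;

-- Re-declaring the type with a USING clause forces one full-table
-- rewrite, computing the value for every row as it goes.
-- (Expression is a placeholder for the real derivation.)
ALTER TABLE big_table
    ALTER COLUMN new_col TYPE integer
    USING (old_col * 2);
```

Indexes on the rewritten column would still be rebuilt afterward, so the drop/recreate steps from Ludwik's list still apply around this.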
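For the no-down-time path, the incremental phase Kevin asks to see typically looks like a keyed batch loop; one iteration, with the same illustrative names and an assumed integer primary key `id`, might be diagnosed exactly as he requests:

```sql
-- One batch of the incremental update, bounded by a primary-key
-- window so each statement touches a limited number of rows.
EXPLAIN ANALYZE
UPDATE big_table
   SET new_col = old_col * 2          -- placeholder expression
 WHERE id >= 1 AND id < 50001
   AND new_col IS NULL;               -- skip rows already done

-- A plain (non-FULL) VACUUM between batches lets later batches
-- reuse the dead tuple space instead of bloating the table.
VACUUM big_table;
```

The EXPLAIN ANALYZE output from one such batch is what would reveal whether the time goes to the index scan, the heap updates, or index maintenance.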