Hi,

I have a table where I need to update 7 million rows. I'm trying to do this without downtime, but no matter what I do, I get massive slowdowns every 5 minutes.

Details: the table's schema contains 6 integers, 2 timestamps, 1 varchar and 1 text column. I added a new text field (currently NULL), and now I need to update it to the correct value(s). Basically I need to run 3 queries:

update mytable set textfield='text1' where field1=3 and field2=1 and textfield is null;
update mytable set textfield='text2' where field1=4 and field2=1 and textfield is null;
update mytable set textfield='' where textfield is null;

Doing them this way would probably slow the database down too much (well, I haven't tried it on the live database, but I don't like running such 'unlimited' queries, because I cannot track their progress; if one has already been running for 5 minutes, I have no way to see its 'completion percentage' :-( ).

So I'm trying to do it step by step, like:

update mytable set textfield='text1' where idfield in (select idfield from mytable where field1=3 and field2=1 and textfield is null limit 1000);

I tried different limits and waiting between the queries, but whatever I do, every 5 minutes the db slows down. I checked the documentation, and it seems it is running a CHECKPOINT every 5 minutes.

Is there anything I could do to make this work better? The documentation recommends increasing checkpoint_timeout, but the current value is generally fine for us, except right now, when I need to do this mass update. Could I just issue CHECKPOINT commands "manually"? The idea is that if I do it more often than every 5 minutes, maybe the slowdown will not be so massive. Or is there some other, more efficient way to do such a mass update?

thanks,
gabor

-- 
Sent via pgsql-general mailing list (pgsql-general@xxxxxxxxxxxxxx)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general
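
For reference, a minimal sketch of one "step" of the batched approach described above, together with a rough progress check. It assumes idfield is the table's primary key and reuses the table and column names from the message; the LIMIT of 1000 is only illustrative, and each batch is meant to run as its own transaction (for example from a shell loop or a psql script), not inside one big transaction.

-- one batch; repeat until it reports UPDATE 0
update mytable
   set textfield = 'text1'
 where idfield in (select idfield
                     from mytable
                    where field1 = 3
                      and field2 = 1
                      and textfield is null
                    limit 1000);

-- rough progress indicator: rows still waiting for a value
-- (a sequential scan unless textfield is indexed)
select count(*) from mytable where textfield is null;

Running each batch in its own transaction keeps the row locks short-lived, and the count(*) between batches gives the 'completion percentage' that a single unbounded UPDATE cannot.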
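
On the checkpoint side, a sketch of the two options mentioned above, assuming the setting lives in postgresql.conf; the value shown is only an example, not a recommendation.

# postgresql.conf: lengthen the interval between automatic checkpoints
# (picked up with a reload); example value only
checkpoint_timeout = 15min

-- or force a checkpoint by hand between batches:
CHECKPOINT;

Note that CHECKPOINT writes everything out immediately, so issuing it more often mainly turns one large I/O burst into several smaller ones rather than removing the burst.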