On Mon, Jun 27, 2011 at 5:37 PM, <tv@xxxxxxxx> wrote:
>> The mystery remains, for me: why updating 100,000 records could complete
>> in as quickly as 5 seconds, whereas an attempt to update a million
>> records was still running after 25 minutes before we killed it?
>
> Hi, there are a lot of possible causes. Usually this is caused by a plan
> change - imagine for example that you need to sort a table and the amount
> of data just fits into work_mem, so that it can be sorted in memory. If
> you need to perform the same query with 10x the data, you'll have to sort
> the data on disk, which is way slower, of course.
>
> And there are other such problems ...

I would rather assume it is one of the "other problems", typically related
to handling the transaction (e.g. checkpoints, WAL, creating copies of the
modified records and adjusting the indexes ...).

Kind regards

robert

--
remember.guy do |as, often| as.you_can - without end
http://blog.rubybestpractices.com/
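A rough way to see which of these effects dominates is to compare the plans
for a small and a large batch directly. Below is a minimal sketch, assuming a
hypothetical "accounts" table with "id" and "balance" columns; EXPLAIN ANALYZE
really executes the statement, hence the surrounding BEGIN/ROLLBACK:

  -- Hypothetical table/column names; the point is only to compare the two
  -- plans. EXPLAIN ANALYZE executes the UPDATE, so roll it back afterwards.
  BEGIN;
  EXPLAIN ANALYZE
  UPDATE accounts SET balance = balance * 1.01 WHERE id <= 100000;
  ROLLBACK;

  -- Same statement with 10x the rows: compare the chosen plan, and if a sort
  -- node appears, check its "Sort Method" line to see whether the sort stayed
  -- in memory (quicksort) or spilled to disk (external merge).
  BEGIN;
  EXPLAIN ANALYZE
  UPDATE accounts SET balance = balance * 1.01 WHERE id <= 1000000;
  ROLLBACK;

Checkpoint and WAL pressure during the larger run can be checked separately,
for example by enabling log_checkpoints and watching the server log while the
statement runs.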