Re: Optimizing bulk update performance

On 2013-04-27, Yang Zhang <yanghatespam@xxxxxxxxx> wrote:
> On Sat, Apr 27, 2013 at 1:55 AM, Misa Simic <misa.simic@xxxxxxxxx> wrote:

>> Optionally you can run VACUUM ANALYZE after the bulk operation...
>
> But wouldn't a bulk UPDATE touch many existing pages (say, 20%
> scattered around) to mark rows as dead (per MVCC)?  I guess it comes
> down to: will PG be smart enough to mark dead rows in largely
> sequential scans (rather than, say, jumping around in whatever order
> rows from foo are yielded by the above join)?

A plpgsql FOR-IN-query loop isn't going to be that smart; it's a
procedural language and does things procedurally. If you want to do
set operations, use SQL.
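
For contrast, the row-at-a-time shape being warned against looks
something like this (a hypothetical sketch; the names accounts(id,
balance) and new_balances are made up for illustration):

 -- Hypothetical anti-pattern: one UPDATE per row, each separately
 -- planned and executed, touching the heap in whatever order the
 -- loop yields rows.
 DO $$
 DECLARE
   r record;
 BEGIN
   FOR r IN SELECT id, balance FROM new_balances LOOP
     UPDATE accounts SET balance = r.balance WHERE id = r.id;
   END LOOP;
 END $$;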

This:

 UPDATE existing-table SET ... FROM temp-table WHERE join-condition;

will likely get you a sequential scan over the existing table and
should be reasonably performant as long as the temp table is small
enough to fit in memory.
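
For concreteness, a minimal sketch of that set-based pattern, using
the same hypothetical names (stage the new values in a temp table,
update in one statement, then refresh statistics as suggested
upthread):

 -- Stage the new values; COPY is much faster than per-row INSERTs.
 -- (COPY FROM 'file' reads a server-side file; use psql's \copy for
 -- a client-side file.)
 CREATE TEMP TABLE new_balances (id int PRIMARY KEY, balance numeric);
 COPY new_balances FROM '/path/to/new_balances.csv' WITH (FORMAT csv);

 -- One set-based UPDATE joining against the staged rows.
 UPDATE accounts a
    SET balance = n.balance
   FROM new_balances n
  WHERE a.id = n.id;

 -- Refresh planner statistics after the bulk change.
 ANALYZE accounts;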

-- 
⚂⚃ 100% natural


