
Re: processing large amount of rows with plpgsql


 



> > There is (almost) no way to
> > force commit inside a function --
> 
> So what you are saying is that this behavior is normal and we should
> either equip ourselves with enough disk space (which I am trying now,
> it is a cloud server, which I am resizing to gain more disk space and
> see what will happen) or do it with an external (scripting) language?
> 

Hello,

a relatively simple way to work around your performance/resource problem
is to slice the update.

e.g.:

create function myupdate(slice int) ...

for statistics_row in 
   SELECT * FROM statistics 
   WHERE id % 16 = slice
   -- or, to spread rows more evenly:
   WHERE hashtext(id::text) % 16 = slice
   ...

and then call your function with the values 0..15 (when using 16 slices).

Use a power of 2 for the number of slices. 

It may be faster to use many slices, and
this allows you to do the job in parallel over a few connections.
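Spelled out, the sliced function could look roughly like this (a sketch
only: the `statistics` table, its integer `id` column, and the `processed`
flag updated in the loop body are assumptions standing in for your real
schema and per-row work; note also that `hashtext()` can return negative
values, and `%` in PostgreSQL keeps the sign of the dividend, so masking
with `& 15` is a safe way to get a slice number in 0..15):

```sql
-- Sketch: process one slice of the statistics table per call,
-- so each call is its own transaction.
CREATE OR REPLACE FUNCTION myupdate(slice int) RETURNS void AS $$
DECLARE
    statistics_row statistics%ROWTYPE;   -- assumed table
BEGIN
    FOR statistics_row IN
        SELECT * FROM statistics
        -- & 15 masks the hash to 0..15 and, unlike % 16,
        -- never yields a negative result for negative hashes.
        WHERE (hashtext(id::text) & 15) = slice
    LOOP
        UPDATE statistics
           SET processed = true          -- hypothetical per-row work
         WHERE id = statistics_row.id;
    END LOOP;
END;
$$ LANGUAGE plpgsql;

-- Each slice commits separately, e.g. sequentially:
--   SELECT myupdate(0);  SELECT myupdate(1);  ...  SELECT myupdate(15);
-- or from several psql sessions at once, one slice per session.
```

Because every `SELECT myupdate(n)` is a separate top-level statement, each
slice commits on its own, which keeps the per-transaction resource usage
(and the disk space held by any temporary state) bounded.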

HTH,

Marc Mamin

-- 
Sent via pgsql-general mailing list (pgsql-general@xxxxxxxxxxxxxx)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


