Re: exceptionally large UPDATE

On Wed, Oct 27, 2010 at 10:26 PM, Ivan Sergio Borgonovo
<mail@xxxxxxxxxxxxxxx> wrote:
> I'm increasing maintenance_work_mem to 180MB just before recreating
> the gin index. Should it be more?
>

You can do this on a per-connection basis; there's no need to alter the
config file.  At the psql prompt (or from your script) just execute:

SET maintenance_work_mem = '180MB';

If you've got the RAM, use more of it.  I'd suspect your server has
plenty, so put it to work!  When I reindex, I often give it 1 or 2 GB.
If the whole table fits in that much space, the rebuild is going to be
really fast.

Also, if you are going to update that many rows, you may want to
increase your checkpoint_segments.  Raising it helps a *lot* when
you're loading big data, so I would expect it to help big updates as
well.  It depends on how wide your rows are: 1.5 million rows really
isn't all that big unless you have lots and lots of text columns.
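
Something along these lines in postgresql.conf (the value is only an
illustration; unlike maintenance_work_mem, checkpoint_segments cannot
be set per-session, so it takes a config reload to pick up):

checkpoint_segments = 32   # default is 3; each segment is 16MB of WAL

then reload with "pg_ctl reload" or SELECT pg_reload_conf();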
