I have to run some large UPDATEs on a DB. The largest UPDATE involves a table that has triggers and a GIN index on a computed tsvector. The table holds 1.5M records with about 15 fields of different types, and I have roughly 2.5-3GB of RAM dedicated to PostgreSQL.

The UPDATE queries are simple: a few of them use joins, and they mainly consist of updating records from temporary tables whose structure is very similar to the target's. These updates are rare, so I can afford to tune PostgreSQL just for this large update and then return to a more balanced configuration. I can even afford to be the only user of the DB, so responsiveness of the application using the DB is not an issue; the duration of the update is.

Is there anything I can tune in postgresql.conf to speed up the UPDATE? I'm increasing maintenance_work_mem to 180MB just before recreating the GIN index. Should it be more? The update should be monolithic, and it runs inside a single transaction. Since I can afford to be the only user of the DB for a while, is there anything I can tune to take advantage of that? What else could I change to speed up the update?

The triggers recreate the tsvector, and one of the components of the tsvector is taken from a join table. I'll surely drop the GIN index and recreate it when everything is over. I'm not sure it's a good idea to drop the triggers as well, since I'd have to update the tsvector later anyway, and I suspect that would cause twice the disk I/O. (Rough sketches of what I have in mind are in the P.S. below.)

thanks

--
Ivan Sergio Borgonovo
http://www.webthatworks.it
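P.S. In case it helps to see what I mean, here's roughly the shape of the main UPDATE. The table and column names below are simplified stand-ins for my real schema, not the actual DDL:

  -- bulk update from a staging temp table whose layout mirrors the target,
  -- joined to the target on the primary key
  BEGIN;
  UPDATE items i
  SET    price       = t.price,
         description = t.description
  FROM   tmp_items t
  WHERE  i.item_id = t.item_id;
  -- ... more UPDATEs of the same shape ...
  COMMIT;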
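The index handling I'm planning, with the same made-up names as above:

  -- drop the GIN index first so it isn't maintained row by row during the UPDATE
  DROP INDEX items_fts_gin_idx;

  -- ... run the big UPDATE here ...

  -- give the rebuild plenty of sort memory, then recreate the index
  SET maintenance_work_mem = '180MB';
  CREATE INDEX items_fts_gin_idx ON items USING gin (fts);
  RESET maintenance_work_mem;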
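And this is the alternative I'm unsure about: disabling the triggers during the UPDATE and rebuilding the tsvector in a second pass. The trigger name, the categories table, and the to_tsvector() expression are hypothetical; the second pass is the one I'm afraid doubles the disk I/O:

  ALTER TABLE items DISABLE TRIGGER items_tsv_update;

  -- ... run the big UPDATE here ...

  -- rebuild the tsvector in one pass, pulling one component from the join table
  UPDATE items i
  SET    fts = to_tsvector('english',
                 coalesce(i.description, '') || ' ' || coalesce(c.name, ''))
  FROM   categories c
  WHERE  c.category_id = i.category_id;

  ALTER TABLE items ENABLE TRIGGER items_tsv_update;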