Bill Thoen wrote:
Does PG (8.1) ever use existing indexes when executing an UPDATE?
I've got some tables with millions of records, and whenever I update a
column in a way that involves most or all of the records, the EXPLAIN
command seems to indicate that it isn't using the pre-existing indexes.
This results in a slow update, which is further slowed by the presence
of the indexes. So when doing a large update, should I just drop the
indexes first, or is there some good reason to keep them?
Joshua Tolley explained why it's doing a sequential scan, and why that's
a good thing.
As for the added cost of maintaining indexes when doing the UPDATE -
yes, you might want to consider dropping the index(es) before issuing
the UPDATE and then recreating it/them afterwards. That can be
considerably faster.
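As a rough sketch (table, column and index names here are just made-up
placeholders, not from your schema), the sequence would look something
like:

  -- Drop the index so the UPDATE doesn't have to maintain it
  DROP INDEX parcels_county_idx;

  -- Rewrite most/all rows with no index maintenance overhead
  UPDATE parcels SET county = upper(county);

  -- Rebuild the index in one pass over the finished table
  CREATE INDEX parcels_county_idx ON parcels (county);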
I have the feeling you'd need to drop the index and then COMMIT before
you ran the UPDATE and recreated the index, though, since Pg probably
can't really get rid of the index while it's still visible to other
transactions and might be restored by a ROLLBACK anyway. I'm not sure,
though - explicit locking might be used to handle that; I haven't
looked into it.
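If it does work within a single transaction, an untested sketch (same
made-up names as above) would be:

  BEGIN;
  -- Presumably holds an exclusive lock on the table until COMMIT,
  -- so concurrent sessions would block rather than use the old index
  DROP INDEX parcels_county_idx;
  UPDATE parcels SET county = upper(county);
  CREATE INDEX parcels_county_idx ON parcels (county);
  COMMIT;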
--
Craig Ringer