Interesting. Is that plan cached for the life of the session doing the
inserts, the life of the trigger, or until the database is restarted?
I guess I'm trying to figure out how to get the plan to re-cache, without
making it entirely dynamic.
On Thu, 14 Dec 2006, Richard Huxton wrote:
> Ben wrote:
>>> When you insert a tuple, it needs to be inserted into the index, yes.
>>> There is no way an insert can cause a sequential scan, except by some
>>> trigger defined on the table.
>>
>> Actually, as it happens, there *is* a trigger defined on the table to fire
>> before insert, but it too uses an index scan, at least according to
>> explain. Though, you'd think if it actually was using an index scan, that
>> would be showing up in pg_stat_user_tables, which it isn't. Might the fact
>> that the trigger is a plpgsql function be throwing it off and keeping it
>> from using more recent planner stats?
>
> The query plan for the function will be compiled the first time it is
> called. From that point on, it is fixed. It seems that is the source of
> your seq-scans.
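
If I follow, that means a before-insert function along these lines (the
table, column, and function names here are all invented, just to illustrate)
gets its SELECT planned the first time it fires in a session, and keeps that
plan from then on:

CREATE OR REPLACE FUNCTION check_parent() RETURNS trigger AS $$
BEGIN
    -- This SELECT is planned the first time the function is called in a
    -- session; the same plan is then reused on every later call.
    PERFORM 1 FROM parent_table WHERE parent_id = NEW.parent_id;
    IF NOT FOUND THEN
        RAISE EXCEPTION 'no parent row for id %', NEW.parent_id;
    END IF;
    RETURN NEW;
END;
$$ LANGUAGE plpgsql;

CREATE TRIGGER parent_check BEFORE INSERT ON child_table
    FOR EACH ROW EXECUTE PROCEDURE check_parent();
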
> You can use the EXECUTE statement to construct a dynamic version of the
> query, which will be planned every time it is run.
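
So presumably the dynamic rewrite would look roughly like this (again, the
names are made up), with EXECUTE building and planning the statement on
every call:

CREATE OR REPLACE FUNCTION check_parent() RETURNS trigger AS $$
DECLARE
    found_id integer;
BEGIN
    -- EXECUTE plans the statement each time it runs, so it can use
    -- current statistics rather than the plan cached on the first call.
    EXECUTE 'SELECT parent_id FROM parent_table WHERE parent_id = '
            || quote_literal(NEW.parent_id::text)
        INTO found_id;
    IF found_id IS NULL THEN
        RAISE EXCEPTION 'no parent row for id %', NEW.parent_id;
    END IF;
    RETURN NEW;
END;
$$ LANGUAGE plpgsql;

That trades the stale-plan problem for a bit of extra planning overhead on
every insert, if I understand it right.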

> --
>   Richard Huxton
>   Archonet Ltd