Hi, I'm planning to use prepared statements with an indefinite lifetime in a
daemon that will execute the same statements rather frequently in reply to
client requests.
This link:
http://www.postgresql.org/docs/8.3/static/sql-prepare.html
has a note on performance:
In some situations, the query plan produced for a prepared statement
will be inferior to the query plan that would have been chosen if the
statement had been submitted and executed normally. This is because when
the statement is planned and the planner attempts to determine the
optimal query plan, the actual values of any parameters specified in the
statement are unavailable. PostgreSQL collects statistics on the
distribution of data in the table, and can use constant values in a
statement to make guesses about the likely result of executing the
statement. Since this data is unavailable when planning prepared
statements with parameters, the chosen plan might be suboptimal.
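If I read that right, the problem would look something like this (a
contrived sketch of mine, assuming a hypothetical table t with an indexed
timestamp column c):

PREPARE recent(timestamp) AS SELECT * FROM t WHERE c <= $1;
-- With a literal, the planner can consult the statistics for c:
--   c <= '1990-01-01' might match almost nothing  -> index scan
--   c <= '2020-01-01' might match the whole table -> sequential scan
-- For the prepared statement, $1 is unknown at plan time, so a single
-- generic plan has to serve both extremes and can't be ideal for both.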
I don't understand why Postgres couldn't plan this:
SELECT foo.e, foo.f
FROM foo
WHERE pk=$1 AND b=$2 AND status='1' AND c <= $3;
to be later executed any slower than
SELECT foo.e, foo.f
FROM foo
WHERE pk='abcabc' AND b='13' AND status='1' AND c <= '2007-11-20 13:14:15';
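One way I could check this myself, I suppose, is to compare the two plans
with EXPLAIN (a rough sketch, reusing the example above; q is just a name
I made up):

PREPARE q(text, text, timestamp) AS
SELECT foo.e, foo.f
FROM foo
WHERE pk=$1 AND b=$2 AND status='1' AND c <= $3;

-- plan chosen for the prepared statement (generic, parameters unknown):
EXPLAIN EXECUTE q('abcabc', '13', '2007-11-20 13:14:15');

-- plan chosen when the constants are visible to the planner:
EXPLAIN SELECT foo.e, foo.f
FROM foo
WHERE pk='abcabc' AND b='13' AND status='1' AND c <= '2007-11-20 13:14:15';

If the two EXPLAIN outputs differ, that would presumably be exactly the
situation the documentation warns about.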
Can I help it make more educated guesses? In what scenarios could
prepared statements turn around and bite me by being slower than plain
queries? Is this a real problem in practice? Should I "refresh" prepared
statements from time to time? If so, how -- only by deallocating them and
preparing them anew? Is there a knob to tweak for that?
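For the record, the "refresh" I have in mind is just this (myquery is a
made-up statement name):

DEALLOCATE myquery;
-- re-plan against whatever statistics ANALYZE has gathered since:
PREPARE myquery(text, text, timestamp) AS
SELECT foo.e, foo.f
FROM foo
WHERE pk=$1 AND b=$2 AND status='1' AND c <= $3;

Is that the intended approach, or is there something less manual?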
Okay, enough questions :)
Thank you for any insights.