On Fri, Aug 05, 2005 at 10:08:42AM +0100, David Goodenough wrote:
> I was looking at an application recently which was written in Java and
> used PostgreSQL as its DB. In it, extensive use had been made of
> PreparedStatements, both for SELECTs and for INSERT, UPDATE and DELETE
> statements. Some of the routines had multiple UPDATEs doing much the
> same thing but with slightly different parameters. In the comments it
> was stated that it was better to prepare lots of statements in advance
> rather than build one on the spot (and then prepare it, if it needed
> the substitution) because of the optimiser.

Which version of PostgreSQL was this built for? Until recently there was
no support for server-side prepared statements, so it mattered not one
whit whether you had one or a thousand prepared queries; it was all done
by the client anyway.

> This set me thinking (always dangerous). I can see how a SELECT can be
> helped by preparing the statement, but not really how an INSERT could,
> or, other than the SELECT implicit in the WHERE clause of an UPDATE or
> DELETE, how UPDATE or DELETE statements would be helped.

From the executor's point of view, there is no difference between a
SELECT, INSERT, DELETE or UPDATE. Each runs a query against the database
but does something different with the result: SELECT sends it to the
client, UPDATE changes some values and writes the new tuple out, and
DELETE marks the rows deleted. INSERT ... VALUES () has a trivial plan,
but INSERT ... SELECT can be complicated.

On the client side, prepared statements simplify coding, since they
separate the actual SQL text from the function it performs. So there you
should use one statement for each "operation" you perform, whatever that
means for your app.

On the server side, prepared statements are a way of saving the plan of
a query and using it multiple times. So the benefit is related to how
many times you use the statement versus how complex the query is
(parsing and planning time). If your INSERT statement is simple, why
bother with prepared statements, since the planning time will be almost
nil anyway? If your hugely complicated DELETE is only run once, again
there is no benefit, since you're not reusing the plan. Only in the case
where you have a query which you execute a lot of times (10, 100, 1000)
is there a noticeable benefit. Accordingly, several Postgres frontends
support prepared statements but only actually plan them in the server if
you use them more than a predefined number of times.

Actually, there is one downside to prepared queries. When processing
each query individually, PostgreSQL can use the statistics for the
values given to produce the optimal plan for that set. If your values
are not "equally distributed" (can't think of a better phrase), then
that plan might not be optimal for all the other substitutions you might
do. Something to think about.

In any case, I hope this has clarified things for you. It's all a
tradeoff between code clarity, parsing, planning and execution time.

Hope this helps,
-- 
Martijn van Oosterhout <kleptog@xxxxxxxxx>   http://svana.org/kleptog/
> Patent. n. Genius is 5% inspiration and 95% perspiration. A patent is a
> tool for doing 5% of the work and then sitting around waiting for
> someone else to do the other 95% so you can sue them.
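For illustration, here is a minimal sketch of the client-side case in
Java/JDBC, in the spirit of the application described at the top of the
thread. The connection details, table and column names are invented and
it assumes the org.postgresql JDBC driver is on the classpath, so treat
it as a sketch rather than a drop-in example; the point is only that the
SQL text is prepared once and then reused with different parameters:

    import java.sql.Connection;
    import java.sql.DriverManager;
    import java.sql.PreparedStatement;

    public class PreparedUpdateDemo {
        public static void main(String[] args) throws Exception {
            // Load the PostgreSQL JDBC driver (needed on older JDKs).
            Class.forName("org.postgresql.Driver");

            // Connection details are placeholders; adjust for your setup.
            Connection conn = DriverManager.getConnection(
                    "jdbc:postgresql://localhost/testdb", "user", "password");

            // The SQL text is written once; only the parameters change.
            // The accounts table and its columns are purely illustrative.
            PreparedStatement update = conn.prepareStatement(
                    "UPDATE accounts SET balance = balance + ? WHERE id = ?");
            try {
                // Reusing the same statement many times is where
                // preparation pays off: the text is parsed (and, with
                // server-side prepared statements, planned) only once.
                for (int id = 1; id <= 1000; id++) {
                    update.setInt(1, 10);   // amount to add
                    update.setInt(2, id);   // which row to update
                    update.executeUpdate();
                }
            } finally {
                update.close();
                conn.close();
            }
        }
    }

On the client side this mainly buys the code-clarity benefit described
above; the planning win only appears once the statement is actually
prepared and reused on the server.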
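The statistics caveat can be seen directly with server-side
PREPARE/EXECUTE, assuming a PostgreSQL version recent enough to support
them. The sketch below, again Java but really just driving plain SQL,
uses an invented orders table with a skewed status column; the
interesting part is comparing the plan stored for the prepared statement
with the plan chosen when the planner can see the literal value:

    import java.sql.Connection;
    import java.sql.DriverManager;
    import java.sql.ResultSet;
    import java.sql.Statement;

    public class PreparedPlanDemo {
        public static void main(String[] args) throws Exception {
            Class.forName("org.postgresql.Driver");
            Connection conn = DriverManager.getConnection(
                    "jdbc:postgresql://localhost/testdb", "user", "password");
            Statement st = conn.createStatement();

            // Server-side prepared statement: parsed and planned once,
            // without knowing which value will be substituted for $1.
            st.execute("PREPARE by_status(text) AS " +
                       "SELECT * FROM orders WHERE status = $1");

            // Compare the plan kept for the prepared statement with the
            // plan chosen for the same query written with a literal.
            printPlan(st, "EXPLAIN EXECUTE by_status('lost')");
            printPlan(st,
                "EXPLAIN SELECT * FROM orders WHERE status = 'lost'");

            st.execute("DEALLOCATE by_status");
            st.close();
            conn.close();
        }

        static void printPlan(Statement st, String sql) throws Exception {
            System.out.println(sql);
            ResultSet rs = st.executeQuery(sql);
            while (rs.next()) {
                System.out.println("  " + rs.getString(1));
            }
            rs.close();
        }
    }

If almost every row has status 'shipped', the generic plan may
reasonably be a sequential scan, and that same plan is then used for the
rare 'lost' rows where an index scan would have been cheaper.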