Trevor Talbot wrote:
>> Another is because I want transactions to start and end in the database,
>> not in external application code which might crash before a COMMIT.
>
> Hmm, how do you handle this logically? Do your applications never
> need to submit chunks of work at once? Or do you do something like
> fill in a temporary table, and have a proc work from that?
Of course not "never," but the goal is one logical database transaction
per page invocation. It's a discipline I got into when I was working on
a site that clocked 3/4 billion page views a month on a severely overtaxed
Sun Oracle box. A lot of it involves caching on the web server. For
instance, static data like pulldown lists gets cached and refreshed
either by expiry or by the application blowing the cache after an
update. If I have a thousand users pulling the same list of physicians
a hundred times a day, I cache it. If users need to view a complex
report where the dependent data only changes once every 24 hours (like
an adserver pull), I cache it and have an offline job rebuild it when
fresh data becomes available.
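The expiry-or-invalidate pattern described above can be sketched roughly like this (the class, key names, and loader function are my own illustration, not anything from the original setup):

```python
import time

class PulldownCache:
    """Tiny illustrative cache: entries expire after a TTL, and the
    application can blow an entry away after an update."""

    def __init__(self, ttl_seconds=3600):
        self.ttl = ttl_seconds
        self._store = {}  # key -> (value, timestamp)

    def get(self, key, loader):
        """Return the cached value, or call loader() (e.g. a DB query)
        on a miss or after expiry."""
        entry = self._store.get(key)
        if entry is not None:
            value, stamp = entry
            if time.time() - stamp < self.ttl:
                return value
        value = loader()
        self._store[key] = (value, time.time())
        return value

    def invalidate(self, key):
        """Blow the cache after an update so the next read hits the DB."""
        self._store.pop(key, None)

# A thousand users pulling the same physician list hit the cache,
# not the database (the loader here is a stand-in for the real query).
cache = PulldownCache(ttl_seconds=86400)
physicians = cache.get("physicians", lambda: ["Dr. A", "Dr. B"])
```

The offline-rebuild case is the same idea driven from the other end: the batch job calls `invalidate()` (or repopulates the entry) when fresh data lands.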
Result sets get cached either in the user's session or in hidden DIVs to
reduce redundant database calls.
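Session-level result-set caching is the same memoization idea, scoped to one user. A minimal sketch, assuming the session behaves like a dict (the key name and query callable are hypothetical):

```python
def cached_query(session, cache_key, run_query):
    """Store a result set in the user's session so repeated page hits
    within the session don't re-issue the same database call."""
    results = session.setdefault("_resultset_cache", {})
    if cache_key not in results:
        results[cache_key] = run_query()  # only the first hit touches the DB
    return results[cache_key]

session = {}  # stand-in for a real web session object
rows = cached_query(session, "report:latest", lambda: [("row", 1)])
```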
I build database-intensive applications starting at the database,
beginning with an ERD, then the proc primitives. Then I wireframe the
application from the perspective of the database, building macros of
procedures. After testing, these go into the application's API. The
brass ring is being able to take a validated, prepared and filtered POST
array and send it to a stored procedure. And rather than building a
200+ line SELECT statement in the application, I build a view to
encapsulate it.
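The "validated, filtered POST array straight to a proc" step might look something like this sketch; the procedure name, its parameters, and the filter helper are all hypothetical, and the actual database call is only shown in a comment since it needs a live connection:

```python
def filter_post(post, allowed):
    """Whitelist-filter a POST dict down to exactly the parameters the
    stored procedure accepts, in declaration order. Anything else
    (e.g. CSRF tokens, stray fields) is dropped."""
    missing = [name for name in allowed if name not in post]
    if missing:
        raise ValueError("missing parameters: %s" % ", ".join(missing))
    return [post[name] for name in allowed]

# Hypothetical procedure update_physician(id, name, specialty):
params = filter_post(
    {"id": "42", "name": "Dr. A", "specialty": "ENT", "csrf_token": "x"},
    ["id", "name", "specialty"],
)
# With a real DB-API connection the call would be something like:
#   cur.execute("SELECT update_physician(%s, %s, %s)", params)
# so the transaction starts and ends inside the procedure, not in
# application code that might die before a COMMIT.
```

The same argument applies to the view: the application issues `SELECT ... FROM report_view WHERE ...` and the 200+ lines of join logic live in one place in the database.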
I'm not saying that's the best way to build an application. It's just
what I'm most comfortable doing and it works for me.
---------------------------(end of broadcast)---------------------------
TIP 9: In versions below 8.0, the planner will ignore your desire to
choose an index scan if your joining column's datatypes do not
match