Ivan Sergio Borgonovo wrote:
> On Tue, 04 Dec 2007 08:14:56 +0000
> Richard Huxton <dev@xxxxxxxxxxxx> wrote:
>> Unless it's an obvious decision (millions of small identical
>> queries vs. occasional large complex ones) then you'll have to
>> test. That's going to be true of any decision like this on any
>> system.
> :(
> I'm trying to grasp a general idea from the viewpoint of a developer
> rather than a sysadmin. At this moment I'm not interested in
> optimisation; I'm interested in understanding the trade-offs of
> certain decisions when weighed against a cleaner interface.
Always go for the cleaner design. If it turns out that isn't fast
enough, *then* start worrying about having a bad but faster design.
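To make that concrete, here's a hypothetical sketch of the trade-off
(the schema and names are invented purely for illustration):

  -- Clean design: order totals are derived on demand from the line
  -- items, so there is one copy of the data and nothing to keep in sync.
  SELECT o.order_id, SUM(li.unit_price * li.quantity) AS total
  FROM orders o
  JOIN line_items li ON li.order_id = o.order_id
  GROUP BY o.order_id;

  -- "Bad but faster" design: denormalise by caching the total on the
  -- order row. Reads become trivial, but every change to line_items
  -- must now also update orders (from the application or a trigger),
  -- or the two copies drift apart.
  ALTER TABLE orders ADD COLUMN cached_total numeric;
  SELECT order_id, cached_total FROM orders;

Only pay that maintenance cost once you've measured that the clean
version really is too slow.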
> Most of the documents available are from a sysadmin point of view.
> That makes me think that unless I write terrible SQL it won't make a
> big difference, and if the application needs to run faster, the first
> place I'll have to look is the PG config.
The whole point of an RDBMS is so that you don't have to worry about
this. If you have to start tweaking the fine details of these things,
that's the point where the RDBMS has reached its limits. In a perfect
world you wouldn't need to configure PG either, but it's not that clever
I'm afraid.
Keep your database design clean, likewise with your queries, consider
whether you can cache certain results, and get everything working first.
Then look for where the bottlenecks are: do you have any unexpectedly
long-running queries? (See www.pgfoundry.org for some tools to help with
log analysis.)
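For example, the quickest way to spot long-running queries is the
server's own logging (log_min_duration_statement is a real
postgresql.conf parameter; the 200 ms threshold is just an illustrative
value to pick to suit your application):

  # postgresql.conf -- log every statement taking longer than 200 ms
  log_min_duration_statement = 200

  -- And to see what is running right now, from psql:
  SELECT procpid, current_query FROM pg_stat_activity;

(The pg_stat_activity column names have changed across releases, so
check the docs for your version.)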
> This part (for posterity) looks like the most interesting one for
> developers:
> http://www.gtsm.com/oscon2003/toc.html
> starting from the "Functions" section.
Note that this is quite old now, so some performance-related assumptions
will be wrong for current versions of PG.
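Happily, old rules of thumb are easy to re-test against whatever
version you're actually running, e.g. (table and column names here are
placeholders):

  -- EXPLAIN ANALYZE executes the query and shows the plan the current
  -- planner actually chose, with real row counts and timings.
  EXPLAIN ANALYZE SELECT * FROM my_table WHERE some_col = 42;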
> Still I can't understand some things; I'll come back.
--
Richard Huxton
Archonet Ltd