Andreas Hartmann wrote:
> Here's some info about the actual amount of data:
>
> SELECT pg_database.datname,
>        pg_size_pretty(pg_database_size(pg_database.datname)) AS size
> FROM pg_database WHERE pg_database.datname = 'vvz_live_1';
>
>    datname   |  size
> -------------+---------
>  vvz_live_1  | 2565 MB
>
> I wonder why the actual size is so much bigger than the data-only dump -
> is this because of index data etc.?
I suspect Guillaume is right and you've not been vacuuming. That, or
you've got a *LOT* of indexes. If the database is only 27MB dumped, I'd
just dump/restore it.
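If you want to see where the space is going first, something along these
lines shows the biggest tables with and without their indexes (assumes
8.1 or later for pg_total_relation_size; adjust for your version):

```sql
-- Rough sketch: per-table size, bare vs. including indexes and TOAST.
SELECT relname,
       pg_size_pretty(pg_relation_size(oid))       AS table_only,
       pg_size_pretty(pg_total_relation_size(oid)) AS with_indexes
FROM pg_class
WHERE relkind = 'r'
ORDER BY pg_total_relation_size(oid) DESC
LIMIT 10;
```

A plain VACUUM VERBOSE on the big tables will also tell you how many
dead rows it finds, which is the other likely culprit.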
Since the database is read-only it might be worth running CLUSTER on the
main tables if there's a sensible ordering for them.
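For example (table and index names made up here - substitute your own):

```sql
-- Hypothetical names for illustration only.
-- CLUSTER rewrites the table in index order, so run ANALYZE afterwards.
CLUSTER courses_semester_idx ON courses;  -- 8.3+ spelling: CLUSTER courses USING courses_semester_idx
ANALYZE courses;
```

Since your data is read-only, you only pay the rewrite cost once and the
ordering can't degrade afterwards.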
>> What in particular is slow?
> There's no particular bottleneck (at least that we're aware of). During
> the first couple of days after the beginning of the semester the
> application request processing tends to slow down due to the high load
> (many students assemble their schedule). The customer upgraded the
> hardware (which already helped a lot), but they asked us to find further
> approaches to performance optimization.
1. Cache sensibly in the application (I should have thought there's
plenty of opportunity here).
2. Make sure you're using a connection pool and have sized it reasonably
(try 4, 8, 16 and see what loads you can support).
3. Use prepared statements where it makes sense. Not sure how you'll
manage the interplay between this and connection pooling in JDBC. Not a
Java man I'm afraid.
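At the SQL level the idea looks like this (statement name and query are
hypothetical; a JDBC PreparedStatement can achieve the equivalent,
depending on your driver's settings):

```sql
-- Hypothetical statement name and query, for illustration.
-- Planning happens once at PREPARE; EXECUTE just runs the stored plan.
PREPARE student_schedule (int) AS
    SELECT course_id, title
    FROM enrolments JOIN courses USING (course_id)
    WHERE student_id = $1;

EXECUTE student_schedule(12345);
DEALLOCATE student_schedule;
```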
If you're happy with the query plans, you're looking to reduce overheads
as much as possible during peak times.
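Worth double-checking those plans with EXPLAIN ANALYZE against realistic
data volumes, e.g. (query and values made up):

```sql
-- Compare the planner's estimates against actual row counts and timings.
EXPLAIN ANALYZE
SELECT course_id, title
FROM courses
WHERE semester = '2006w';  -- hypothetical filter
```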
4. Offload more of the processing to clients with some fancy ajax-ed
interface.
5. Throw in a spare machine as an app server for the first week of term.
Presumably your load is 100 times average at this time.
--
Richard Huxton
Archonet Ltd