Richard, thanks for your reply!
Richard Huxton wrote:
Andreas Hartmann wrote:
Dear PostgreSQL community,
first some info about our application:
- Online course directory for a University
- Amount of data: complete dump is 27 MB
- Semester is part of primary key in each table
- Data for approx. 10 semesters stored in the DB
- Read-only access from web application (JDBC)
Our client has asked us if the performance of the application could be
improved by moving the data from previous years to a separate "archive"
application.
If you had 27GB of data maybe, but you've only got 27MB - that's
presumably all sitting in memory.
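To double-check that the data really is served from memory, I suppose I could look at the buffer cache hit ratio (just a sketch, using the standard pg_stat_database statistics view; 'vvz_live_1' is our database):

-- Rough cache hit ratio for the whole database
SELECT datname,
       round(100.0 * blks_hit / nullif(blks_hit + blks_read, 0), 2) AS cache_hit_pct
FROM pg_stat_database
WHERE datname = 'vvz_live_1';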
Here's some info about the actual amount of data:
SELECT pg_database.datname,
       pg_size_pretty(pg_database_size(pg_database.datname)) AS size
FROM pg_database
WHERE pg_database.datname = 'vvz_live_1';
    datname    |  size
---------------+---------
 vvz_live_1    | 2565 MB
I wonder why the actual size is so much bigger than the data-only dump.
Is this because of index data and other overhead?
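To see where the space actually goes, something like the following sketch could be run against the system catalogs (pg_total_relation_size includes indexes and TOAST data, pg_relation_size is just the table heap):

-- Sketch: largest relations, heap size vs. total size (incl. indexes and TOAST)
SELECT n.nspname,
       c.relname,
       pg_size_pretty(pg_relation_size(c.oid))       AS heap_size,
       pg_size_pretty(pg_total_relation_size(c.oid)) AS total_size
FROM pg_class c
JOIN pg_namespace n ON n.oid = c.relnamespace
WHERE c.relkind = 'r'
  AND n.nspname NOT IN ('pg_catalog', 'information_schema')
ORDER BY pg_total_relation_size(c.oid) DESC
LIMIT 20;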
What in particular is slow?
There's no particular bottleneck (at least none that we're aware of). During
the first couple of days after the start of the semester, request processing
in the application tends to slow down under the high load (many students
assembling their schedules). The customer upgraded the hardware, which
already helped a lot, but they have asked us to find further approaches to
performance optimization.
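Since the slowdown only shows up under the semester-start load, one idea would be to check the statistics views afterwards for tables that are mostly read by sequential scans (a sketch; the actual JDBC queries would still need EXPLAIN to confirm whether an index is missing):

-- Sketch: tables read mostly via sequential scans are candidates for a closer look
SELECT relname, seq_scan, seq_tup_read, idx_scan, idx_tup_fetch
FROM pg_stat_user_tables
ORDER BY seq_tup_read DESC
LIMIT 10;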
-- Andreas
--
Andreas Hartmann, CTO
BeCompany GmbH
http://www.becompany.ch
Tel.: +41 (0) 43 818 57 01