Database Optimization and Performance
- Subject: Database Optimization and Performance
- From: Joe McClintock <joe.mcclintock@xxxxxxxxxx>
- Date: Wed, 30 Aug 2006 11:29:50 -0700
- User-agent: Thunderbird 1.5.0.5 (Windows/20060719)
I have inherited the management of a PostgreSQL 8.0.3 database of around
2.3 GB and growing. I'm still coming up to speed on PostgreSQL
administration, so I'm not an expert, although I have had quite a bit of
experience with Oracle, for what that is worth.
Recently we had an issue with extreme performance degradation on a pair
of key tables, where a simple "select * from [table name]" had gone from
about 1.08 milliseconds to 37-45 seconds. This had a very negative impact
on the performance of our web application. We had loaded about 1.2 million
rows of data into the db on the previous Wednesday, and on Thursday and
Friday the database and application performance was good. Weekend activity
is pretty low, so it was surprising to find the db and application
performance in terrible shape on Monday. If the data load degraded database
performance, why did it take four days for the problem to manifest? Regular
nightly backups are done, but as far as I can tell vacuum, analyze, and
reindex have been done only when needed, which to this point has not been
very often.
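The timings quoted above were taken with something along these lines in
psql (the table name below is a placeholder, not the real one):

    \timing
    -- "orders" stands in for the actual production table
    SELECT * FROM orders;
    -- reported Time: roughly 1 ms before the slowdown, 37000+ ms after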
I ran a vacuum, analyze, and reindex on the database with no change in
performance; query time was still 37+ seconds, if anything a little worse.
On our test system I found that a pg_dump from production followed by a
restore brought the database back to full performance. So, in desperation,
I shut down the production application, backed up the production database,
renamed the production db, created a new empty production db, and restored
the production backup into the empty db. After a successful restore and a
restart of the web application, everything was up and running like a top.
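For clarity, the maintenance and rebuild steps were roughly the following;
"proddb" is a placeholder for the real database name and the exact flags
may not be verbatim:

    # maintenance pass that did not help ("proddb" is a placeholder)
    vacuumdb --analyze proddb
    psql -c "REINDEX DATABASE proddb" proddb

    # rebuild that brought performance back
    pg_dump proddb > proddb.dump                       # back up production
    psql -c "ALTER DATABASE proddb RENAME TO proddb_old" template1
    createdb proddb                                    # new, empty production db
    psql proddb < proddb.dump                          # restore into it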
My concern is this: doing a backup and restore does not seem an
appropriate way to manage database fragmentation and performance. The
documentation I have read indicates that vacuum, analyze, and reindex are
the tools to use to de-fragment and optimize the database. In my case they
did not work, and reindexing made query performance slightly worse. Am I
missing something? As the database grows, will I need to essentially
rebuild the db on a regular basis?
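If scheduled vacuuming is the missing piece, the best I have come up with
so far is a nightly cron entry along these lines (just a sketch; the
schedule and user are arbitrary):

    # hypothetical nightly maintenance entry in /etc/crontab
    30 2 * * *  postgres  vacuumdb --all --analyze --quiet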
Thanks