Benjamin Arai wrote:
For the most part the updates are simple one-liners. I currently commit
in large batches to increase performance, but it still takes a while, as
stated above. From evaluating the computer's performance during an
update, the system is thrashing both memory and disk. I am currently
using PostgreSQL 8.0.3.
Example command: "UPDATE data WHERE name=x AND date=y;".
Before you start throwing the baby out with the bathwater by totally
revamping your DB architecture, try some simple debugging first to see
why these queries take so long. Use EXPLAIN ANALYZE, test
vacuuming/analyzing mid-update, and fiddle with postgresql.conf
parameters (the WAL/commit settings especially). Also try committing
with different numbers of statements per transaction -- the optimal
batch size won't be the same across all development tools.
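As a starting point, something like the following sketch shows what that debugging looks like (the SET clause and the 'value' column are placeholders, since the original example command omits them; the literal values are made up too):

```sql
-- EXPLAIN ANALYZE shows the actual plan and per-node timings for one
-- of the problem updates. If this reports a sequential scan over a
-- large "data" table, a missing index on (name, date) is the likely
-- culprit. ('value' is a hypothetical column for illustration.)
EXPLAIN ANALYZE
UPDATE data SET value = 42
 WHERE name = 'x' AND date = '2006-01-01';
```

If the plan does show a sequential scan, an index such as `CREATE INDEX data_name_date_idx ON data (name, date);` is usually the first thing to try before touching any server settings.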
My own experience is that periodic vacuuming and analyzing are very much
needed for batches of small UPDATE commands. For our batch processing,
autovacuum plus commit batches of 1K-10K statements did the trick in
keeping performance up.
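That workflow might look roughly like this sketch (table and column names are placeholders, and the batch size is just one point in the 1K-10K range mentioned above):

```sql
-- Group many one-line updates into a single transaction instead of
-- committing each statement individually; each COMMIT forces a WAL
-- flush, so fewer commits means far less disk churn.
BEGIN;
UPDATE data SET value = 1 WHERE name = 'a' AND date = '2006-01-01';
UPDATE data SET value = 2 WHERE name = 'b' AND date = '2006-01-02';
-- ... a few thousand more statements per batch ...
COMMIT;

-- Between batches, reclaim dead row versions left behind by the
-- updates and refresh planner statistics. (On 8.0 autovacuum is the
-- separate contrib pg_autovacuum daemon; running VACUUM ANALYZE
-- manually between batches achieves much the same effect.)
VACUUM ANALYZE data;
```

Without the mid-batch vacuuming, each UPDATE leaves a dead row version behind, so the table and its indexes bloat steadily and performance degrades over the run.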