However, each night we load data from a legacy COBOL system into the SQL
system and then summarize that data to make the reports faster. This
load process is heavily insert/update driven, but it also issues a
hefty number of selects. The load is taking ever longer to complete.
How many rows do you typically load each night? If it is, say, less than 10% of the total rows, then perhaps the suggestion in the next paragraph is reasonable.
SO ... our goal here is to make this load process take less time. The
biggest part seems to be building the summary table, which is currently
9 million rows. Every night, we drop the table, re-create it, build the
9 million rows of data (we use COPY to load the prepared data, not
INSERT), and then build the indexes on it.
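Roughly, the nightly rebuild looks like this (simplified; the table, column, and file names here are made up):

-- Simplified sketch of the nightly rebuild; summary_report and its
-- columns are placeholder names.
BEGIN;
DROP TABLE summary_report;
CREATE TABLE summary_report (
    account_id integer NOT NULL,
    total      numeric NOT NULL
);
-- Bulk-load the ~9 million prepared rows with COPY, not INSERT.
COPY summary_report FROM '/path/to/prepared_data';
COMMIT;
-- Indexes are built only after the data is in place.
CREATE INDEX summary_report_account_idx ON summary_report (account_id);
ANALYZE summary_report;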
Perhaps placing a trigger on the source table and building a "change log" would be useful. For example, you could scan the change log (looking for inserts, updates, and deletes) and integrate those changes into your summary table; a rough sketch follows below. Obviously, if you are using complex aggregates it may not be possible to adjust the summary table incrementally, but if you are performing simple SUMs, COUNTs, etc. then this is a workable solution.
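Something like this, assuming a simple source table with an account_id and an amount column (all names here are hypothetical):

-- Change log populated by a trigger on the source table.
CREATE TABLE change_log (
    op         char(1) NOT NULL,  -- 'I' = insert, 'U' = update, 'D' = delete
    account_id integer NOT NULL,
    amount     numeric NOT NULL
);

CREATE OR REPLACE FUNCTION log_source_change() RETURNS trigger AS $$
BEGIN
    IF TG_OP = 'INSERT' THEN
        INSERT INTO change_log VALUES ('I', NEW.account_id, NEW.amount);
        RETURN NEW;
    ELSIF TG_OP = 'UPDATE' THEN
        -- Store the delta so the summary can be adjusted directly
        -- (assumes account_id itself does not change).
        INSERT INTO change_log VALUES ('U', NEW.account_id, NEW.amount - OLD.amount);
        RETURN NEW;
    ELSE
        INSERT INTO change_log VALUES ('D', OLD.account_id, OLD.amount);
        RETURN OLD;
    END IF;
END;
$$ LANGUAGE plpgsql;

CREATE TRIGGER source_change
    AFTER INSERT OR UPDATE OR DELETE ON source_table
    FOR EACH ROW EXECUTE PROCEDURE log_source_change();

-- Nightly: fold the logged deltas into the summary, then clear the log.
-- (Brand-new account_ids would need a separate INSERT pass.)
UPDATE summary_report s
   SET total = s.total + d.delta
  FROM (SELECT account_id,
               SUM(CASE WHEN op = 'D' THEN -amount ELSE amount END) AS delta
          FROM change_log
         GROUP BY account_id) d
 WHERE s.account_id = d.account_id;

TRUNCATE change_log;

That way the nightly job only touches the rows that actually changed, instead of rebuilding all 9 million.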
--
Chad
http://www.postgresqlforums.com/