I am developing an application with very predictable database operations:

- several thousand rows are inserted into 3 tables every 5 minutes (the tables contain around 10 million rows each)
- the aggregate tables built from this data are truncated and rebuilt every 5 minutes (several thousand rows each)
- the aggregate tables, and sometimes the large tables, are read regularly through user interaction
- every night, hundreds of thousands of rows of old data are deleted from these 3 tables
- 20-30 other tables get inserted into/updated slowly throughout the day

To optimize insert performance, I disabled autovacuum and row-level stats and instead run "vacuum analyze" on the whole DB every hour. However, that operation takes around 20 minutes of each hour, which means the database spends about 33% of its time vacuuming and analyzing tables.

I'd appreciate any performance advice, but my main concern is how long vacuum/analyze runs and its possible impact on overall DB performance. Thanks!

I am running 8.2 (will be 8.2.1 soon). The box is Windows with 2GB RAM, connected to a SAN over fiber. The data and pg_xlog are on separate partitions.

Modified configuration:

effective_cache_size = 1000MB
random_page_cost = 3
default_statistics_target = 50
maintenance_work_mem = 256MB
shared_buffers = 400MB
temp_buffers = 10MB
work_mem = 10MB
max_fsm_pages = 1500000
checkpoint_segments = 30
stats_row_level = off
stats_start_collector = off
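
For clarity, the hourly maintenance job is just a plain database-wide pass; the per-table form below is only a sketch with placeholder table names (raw_table_1 etc. are not my real schema), in case targeting only the three busy tables and their aggregates would be the better approach:

    -- current hourly job: vacuum and analyze every table in the database
    VACUUM ANALYZE;

    -- possible targeted variant (placeholder table names), limited to the
    -- tables that actually churn every 5 minutes
    VACUUM ANALYZE raw_table_1;
    VACUUM ANALYZE raw_table_2;
    VACUUM ANALYZE raw_table_3;
    -- the aggregate tables are truncated and rebuilt each cycle, so a fresh
    -- ANALYZE may be all they need
    ANALYZE agg_summary;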