Hello Tim,

>> I have increased the value for the /temp_buffers/ server parameter from
>> the default 8 MB to 128 MB. However, this change did not affect the
>> INSERT time for temporary tables.

> It isn't clear why you create vectors of strings rather than just
> select into or something similar.
>
> There are no 'quick fixes' which can be applied without real analysis
> of the system. However, based on the limited information available,
> you may want to consider -
>
> - Increase work_mem to reduce the use of temp files. It needs to be 2
>   to 3 times the largest temp file (but use common sense).

I have already increased work_mem and maintenance_work_mem to 256 MB. I
will check the temp file sizes and adjust work_mem as you suggested (see
the first sketch at the end of this message).

> - Tweak the WAL checkpoint parameters to prevent checkpoints occurring
>   too frequently. Note that there is a trade-off here between the
>   frequency of checkpoints and recovery time after a crash: fewer WAL
>   checkpoints will usually improve performance, but recovery takes
>   longer.
>
> - Verify that your inserts into temporary tables are the bottleneck
>   and not the select from existing data (explain plan etc., and adjust
>   indexes accordingly).

In a few use cases, multiple inserts took 150 seconds out of a total
database processing time of 175 seconds, hence the focus on these
insert statements. I run an ANALYZE statement after the INSERT INTO the
temporary tables, before those tables are used in joins in subsequent
queries; this reduced the subsequent query processing times thanks to
the updated statistics. I will look into adding indexes on these
temporary tables as well.

> How much you can improve insert times will depend on the memory and
> CPU profile of the system. More memory, less use of temp files, faster
> system, so spend a bit of time to make sure your system is configured
> to squeeze as much out of that RAM as you can!

Thank you for the suggestions. I will try these out. For concreteness,
here is roughly what I plan to run for each point; the table names and
parameter values in the sketches below are placeholders, not my actual
schema or settings.
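For the work_mem sizing, I plan to check cumulative temp file usage
first and then override work_mem per session while testing. A minimal
sketch (the 256MB is just my current setting, not a recommendation):

    -- Temp file activity for the current database since the last
    -- statistics reset
    SELECT datname, temp_files, temp_bytes
    FROM pg_stat_database
    WHERE datname = current_database();

    -- Per-session override while experimenting; to be sized to
    -- roughly 2-3x the largest temp file, as you suggested
    SET work_mem = '256MB';

Setting log_temp_files = 0 should also log every temp file created,
which would show the individual file sizes rather than just the totals.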
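For the checkpoint tuning, my reading is that this means postgresql.conf
settings along these lines. The values here are illustrative assumptions
only, and on servers older than 9.5 checkpoint_segments would be tuned
instead of max_wal_size:

    # Fewer, more spread-out checkpoints: usually better bulk INSERT
    # throughput, at the cost of a longer crash recovery
    checkpoint_timeout = 15min           # assumed value; default 5min
    max_wal_size = 4GB                   # assumed value
    checkpoint_completion_target = 0.9   # spread checkpoint I/O out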
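To verify whether the INSERT itself or the underlying SELECT is the
bottleneck, I plan to run the statement under EXPLAIN. Here tmp_results,
base_table and the WHERE clause are hypothetical stand-ins for my actual
query:

    BEGIN;
    -- EXPLAIN ANALYZE really executes the INSERT, so wrap it in a
    -- transaction and roll it back afterwards
    EXPLAIN (ANALYZE, BUFFERS)
    INSERT INTO tmp_results
    SELECT id, payload
    FROM base_table
    WHERE region_id = 42;
    ROLLBACK;

Comparing this against EXPLAIN (ANALYZE, BUFFERS) of the bare SELECT
should show how much time the insert adds on top of the read.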
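Finally, the load-then-ANALYZE pattern I described, extended with an
index built after the bulk load (building it beforehand would mean
maintaining it row by row during the INSERT). Names are again
placeholders:

    CREATE TEMPORARY TABLE tmp_results (id int, payload text);

    INSERT INTO tmp_results
    SELECT id, payload
    FROM base_table
    WHERE region_id = 42;

    -- autovacuum never analyzes temp tables, so gather statistics
    -- manually after the bulk load and before the joins
    ANALYZE tmp_results;

    -- Index to support the subsequent joins
    CREATE INDEX ON tmp_results (id);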