padusuma <venkata.adusumalli@xxxxxxxxx> writes:

> Hello Tim,
>
> I have tried the suggestions provided to the best of my knowledge, but I did
> not see any improvement in the INSERT performance for temporary tables. The
> Linux host on which PostgreSQL database is installed has 32 GB RAM.
> Following are current settings I have in postgresql.conf file:
> shared_buffers = 8GB
> temp_buffers = 256MB
> work_mem = 256MB
> maintenance_work_mem = 256MB
> wal_buffers = 256MB
>
> checkpoint_timeout = 30min
> checkpoint_completion_target = 0.75
> max_wal_size = 1GB
>
> effective_cache_size = 16GB
>
>>>- Increase work_mem to reduce use of temp files. Need it to be 2 to 3
>>>  times largest temp file (but use common sense)
>
>> I have already increased the work_mem and maintenance_work_mem to 256MB. I
>> will check on the temp file sizes and adjust the work_mem parameter as you
>> suggested.
>
>>- Tweak wal checkpoint parameters to prevent wal checkpoints occurring
>>  too frequently. Note that there is a play off here between frequency
>>  of checkpoints and boot time after a crash. Fewer wal checkpoints will
>>  usually improve performance, but recovery time is longer.
>
>> How effectively you can increase insert times will depend on what the
>> memory and cpu profile of the system is. More memory, less use of temp
>> files, faster system, so spend a bit of time to make sure your system is
>> configured to squeeze as much out of that RAM as you can!
>
> Please let me know if there are any other suggestions that I can try.

How are you gathering metrics to determine if performance has improved or
not? Have you seen any change in your explain (analyze, buffers) plans?

Make sure your table statistics are all up-to-date before performing each
benchmark test. I often turn off autovacuum when doing this sort of testing
so that I know exactly when tables get vacuumed and statistics get updated
(just ensure you remember to turn it back on when you're finished!).

Are the wal checkpoints being triggered every 30 mins or more frequently?
Are you still seeing the system use lots of temp files? Do you have any
indexes on the tables you're inserting into?

As mentioned previously, there are no simple/quick fixes here - you cannot
just change a setting and see performance improve. It will be necessary to
do a lot of experimentation, gathering statistics and investigating how
postgres is using buffers, disk IO etc. All of these parameters interact
with each other, so it is critical you have good metrics to see exactly
what your changes do. It is complex and time consuming. I highly recommend
PostgreSQL: High Performance (Ahmed & Smith) and Mastering PostgreSQL
(Schönig) for valuable background/tips - there really is just far too much
to communicate effectively via email.

Tim

--
Tim Cross
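
P.S. In case it helps, here is a rough (untested) sketch of the sort of
queries I mean. To see whether the system is still spilling to temp files,
the cumulative counters in pg_stat_database will tell you:

  -- temp file usage for the current database since the stats were last reset
  SELECT datname,
         temp_files,
         pg_size_pretty(temp_bytes) AS temp_size
  FROM pg_stat_database
  WHERE datname = current_database();

Setting log_temp_files = 0 in postgresql.conf will also log every temp file
created, with its size, which makes it easier to size work_mem against the
largest ones.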
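
For checkpoint frequency, compare timed checkpoints with requested ones
(requested checkpoints mean max_wal_size is being reached before
checkpoint_timeout expires):

  -- checkpoints_req outpacing checkpoints_timed suggests checkpoints are
  -- being forced by WAL volume rather than by the 30 min timeout
  SELECT checkpoints_timed,
         checkpoints_req,
         buffers_checkpoint,
         buffers_backend
  FROM pg_stat_bgwriter;

Turning on log_checkpoints gives the same information per checkpoint in the
server log.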
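
And a rough outline of the benchmark loop I described, with autovacuum
switched off so you control exactly when statistics are updated (the table
names here are just placeholders for your own):

  ALTER SYSTEM SET autovacuum = off;   -- remember to turn it back on afterwards
  SELECT pg_reload_conf();

  VACUUM ANALYZE staging_table;        -- fresh statistics before each run

  EXPLAIN (ANALYZE, BUFFERS)           -- note: this actually executes the INSERT
  INSERT INTO temp_target SELECT * FROM staging_table;

  ALTER SYSTEM SET autovacuum = on;
  SELECT pg_reload_conf();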