Hi Oliver,
I have suggested a few changes below.

On Tue, Feb 16, 2016 at 7:15 PM, Oliver Jagape <oliver.jagape@xxxxxxxxxxxxxx> wrote:
CPU: 16 cores
RAM: 48GB
The data disk is on Fibre Channel SAN storage.
max_connections = 1024
If you really need that many concurrent connections, invest in a connection pooler. If no pooler is used, knock max_connections down by 800 or so (to roughly 200) if you can.
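For example, a minimal pgbouncer setup in transaction pooling mode could look roughly like this (database name, paths and pool sizes are placeholders, not tuned for your workload):

    [databases]
    appdb = host=127.0.0.1 port=5432 dbname=appdb

    [pgbouncer]
    listen_addr = *
    listen_port = 6432
    auth_type = md5
    auth_file = /etc/pgbouncer/userlist.txt
    pool_mode = transaction
    max_client_conn = 1000
    default_pool_size = 32

The applications connect to port 6432, pgbouncer multiplexes them onto a few dozen real backends, and max_connections on the server can then stay low. Note that transaction pooling does not play well with session state such as prepared statements.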
shared_buffers = 8000MB
Test with a lower setting like 4GB; depending on your workload, that may or may not perform better.
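One way to compare 4GB vs. 8GB is to watch the buffer cache hit ratio while you run a representative workload (rough sketch; the pg_stat_database counters are cumulative, so call pg_stat_reset() or diff them between runs):

    SELECT datname,
           round(100.0 * blks_hit / nullif(blks_hit + blks_read, 0), 1) AS cache_hit_pct
      FROM pg_stat_database;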
work_mem = 4096MB
This is orders of magnitude too high. work_mem is allocated per sort/hash/aggregation step, and a single query can use several of those at once, multiplied across all active connections. If you expect queries with large result sets (or DISTINCT, GROUP BY, etc.), 100MB is probably a reasonable starting point.
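To put numbers on it (rough worst case with your 48GB of RAM): one query with three sort/hash nodes can grab 3 * 4GB = 12GB, so four such queries running at once already exhaust the machine. A more common pattern is a modest global default plus per-session bumps for known heavy reports:

    work_mem = 100MB                   # postgresql.conf, global default

    BEGIN;
    SET LOCAL work_mem = '1GB';        -- only for this transaction's big sort
    -- ... run the heavy report here ...
    COMMIT;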
maintenance_work_mem = 512MB
Depending on your relation size, you might want to double this.
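It is only used by maintenance operations (VACUUM, CREATE INDEX, ALTER TABLE ADD FOREIGN KEY), so you can also raise it per session when rebuilding a large index, for example (index and table names are made up):

    SET maintenance_work_mem = '2GB';
    CREATE INDEX CONCURRENTLY idx_orders_created_on ON orders (created_on);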
checkpoint_segments = 512
Is there a reason you chose such a high number?
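For scale (back-of-the-envelope, assuming the default 16MB WAL segment size):

    512 segments * 16MB ≈ 8GB of WAL written between checkpoints
    pg_xlog can reach ~(2 + checkpoint_completion_target) * 512 * 16MB, i.e. 20GB+

That much WAL also means crash recovery can take a long time.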
enable_nestloop = off
Unless you have a clear reason to disable this (it applies to every query on the system), leave it enabled.
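If one specific query misbehaves, you can disable it just for that transaction instead of globally:

    BEGIN;
    SET LOCAL enable_nestloop = off;   -- affects only this transaction
    -- ... run the problem query here ...
    COMMIT;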
autovacuum = off
autovacuum should be on unless you know your data churn patterns and have scheduled manual vacuums.
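If autovacuum was turned off because it could not keep up with a few hot tables, per-table settings are usually a better answer than disabling it cluster-wide (table name and numbers are only illustrative):

    ALTER TABLE busy_table SET (
        autovacuum_vacuum_scale_factor = 0.02,  -- vacuum after ~2% of the rows change
        autovacuum_vacuum_cost_delay = 10       -- lower delay, so it works harder here
    );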
autovacuum_naptime = 60
You probably want that to be more aggressive, e.g. 5 seconds.
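As a concrete starting point in postgresql.conf (illustrative, adjust for your workload):

    autovacuum = on
    autovacuum_naptime = 5s
    autovacuum_max_workers = 6         # default is 3; a 16-core box can afford more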