On Thu, Aug 4, 2011 at 2:38 PM, Kevin Grittner <Kevin.Grittner@xxxxxxxxxxxx> wrote:
> Claire Chang <yenhsiac@xxxxxxxxx> wrote:
>
>> hi, We recently bought a 4 8core 128G memory database server and I
>> am setting it up to replace our old 4 4cores 128G memory database
>> server as a master. The memory related settings that we use on
>> the old machine seem a bit wrong according to the experts on IRC:
>
>> max_connections = 600
>
> You're probably going to get better performance by setting that to 2
> to 3 times the number of actual cores (don't count hyperthreading
> for this purpose), and using a connection pooler to funnel the 600
> user connections down to a smaller number of database connections.
>
>> shared_buffers = 32GB
>
> I seem to remember seeing some benchmarks showing that performance
> falls off after 10GB or 20GB on that setting.
>
>> effective_cache_size = 64GB
>
> Seems sane.
>
>> work_mem = 5MB
>
> You could bump that up, especially if you go to the connection pool.
>
>> maintenance_work_mem = 1GB
>
> OK, but I might double that.
>
>> wal_buffers = 64kB
>
> This should definitely be set to 16MB.

Agreed with everything so far. A few other points.

If you're doing a LOT of writing, and the immediate working set will fit in a smaller shared_buffers, then lower it to something in the 1 to 4GB range max. Lots of writes and a large shared_buffers do not mix well; I have gotten much better performance by lowering shared_buffers on machines that need to write a lot.

I run Ubuntu 10.04 on my big PostgreSQL servers right now. With that in mind, here are some pointers. I'd recommend adding this to rc.local:

# turns off swap
/sbin/swapoff -a

I had a few weird kswapd storms where the kernel just seemed to get confused about having 128G of RAM and swap space. The machine was lagging very hard at odd times of the day until I just turned off swap.
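To illustrate the pooling advice above: one common external pooler is pgbouncer, and a minimal sketch of funneling 600 client connections down to a few dozen server connections might look like the fragment below. The hostname, database name, and exact pool size are illustrative assumptions, not something from this thread; tune default_pool_size to the 2-3x-cores rule Kevin describes.

```ini
; /etc/pgbouncer/pgbouncer.ini -- hypothetical example, values are guesses
[databases]
; mydb is a placeholder database name
mydb = host=127.0.0.1 port=5432 dbname=mydb

[pgbouncer]
listen_addr = *
listen_port = 6432
auth_type = md5
auth_file = /etc/pgbouncer/userlist.txt
; transaction pooling gives the best funneling for many short transactions
pool_mode = transaction
; roughly 2-3x the number of real cores, per the advice above
default_pool_size = 64
; accept all 600 application connections, but share the server pool
max_client_conn = 600
```

The application then connects to port 6432 instead of 5432; PostgreSQL itself only ever sees default_pool_size backends.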
And if you have a caching RAID controller with battery backup, then I'd add a line like this:

echo noop > /sys/block/sda/queue/scheduler

for every RAID drive you have. Any other scheduler really just gets in the way of a good caching RAID controller.

There are also some kernel parameters that affect how fast dirty caches are written out by the OS. They're worth looking into, but they made no big difference on my 128G, 48-core system with 34 15krpm drives.

If you do a lot of inserts / updates / deletes, then look at making vacuum more aggressive. Also look at making the bgwriter a bit more aggressive, cranking up the checkpoint timeout, and having lots of checkpoint segments.

--
Sent via pgsql-performance mailing list (pgsql-performance@xxxxxxxxxxxxxx)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-performance
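Pulling the vacuum / bgwriter / checkpoint advice together, it might translate into a postgresql.conf fragment like the one below for a PostgreSQL release of that era (8.4/9.0). The specific numbers are illustrative starting points I'm assuming, not values given in this thread; benchmark with your own write load before settling on them.

```
# postgresql.conf -- illustrative starting points only
shared_buffers = 4GB                  # 1-4GB range if the box is write-heavy
wal_buffers = 16MB

# "lots of checkpoint segments" and a longer timeout
checkpoint_segments = 64
checkpoint_timeout = 30min
checkpoint_completion_target = 0.9    # spread checkpoint I/O out

# a somewhat more aggressive bgwriter (defaults: 200ms / 100)
bgwriter_delay = 100ms
bgwriter_lru_maxpages = 500

# more aggressive autovacuum for heavy insert/update/delete churn
autovacuum_naptime = 15s              # default is 1min
autovacuum_vacuum_cost_delay = 10ms   # default is 20ms; lower = faster vacuums
autovacuum_max_workers = 6            # default is 3
```

All of these except shared_buffers can be tuned without a dump/reload; shared_buffers and autovacuum_max_workers need a server restart, the rest just a reload.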