On Wed, 21 May 2014 21:39:05 +0700 Stuart Bishop <stuart@xxxxxxxxxxxxxxxx> wrote:

> I've got some boxes with 128GB of RAM and up to 750 connections, just
> upgraded to 9.3 so I'm revising my tuning. I'm getting a
> recommendation from pgtune to bump my shared_buffers up to 30GB and
> work_mem to 80MB. Is a shared_buffers this high now sane?
>
> The PostgreSQL reference doesn't make recommendations on limits, but
> it didn't with earlier versions of PostgreSQL either, where more than a
> few GB was normally a bad thing to do. The most recent blog posts I
> see mentioning 9.3 and modern RAM sizes still seem to cap it at 8GB.
>
> (and yes, I am using pgbouncer but stuck in session mode and up to 750
> connections for the time being)

My experience with a busy database server over the last year or so demonstrated that values much _higher_ than that result in occasional stalls on the part of PostgreSQL. My guess is that the code that manages shared_buffers doesn't scale effectively to 64G (which is where we saw the problem) and would occasionally stall waiting for some part of the code to rearrange some memory, write it to disk, or something else. Other tuning attempts (such as tweaking various checkpoint settings) did not alleviate the problem, but it disappeared completely when we lowered shared_buffers to (I think) 32G.

Unfortunately, I don't have access to the exact details because I no longer work at that job, so I'm pulling from memory. We never did get an opportunity to test whether there was any performance change from 64G -> 32G; I can tell you that if performance decreased, it didn't decrease enough to be noticeable from the application.

So my advice is that 30G might be just fine for shared_buffers, but if you experience stalls (i.e., the database stops responding for an uncomfortably long time), keep that in mind and lower it to see if that fixes them.
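For reference, the values being discussed would look like this in postgresql.conf. This is just a sketch of the numbers from the thread, not a recommendation; the 16GB step-down is my own illustrative example of "lower it and see":

```
# postgresql.conf -- values from this thread (128GB RAM box)
shared_buffers = 30GB   # pgtune's suggestion; the OP asks whether this is sane
work_mem = 80MB         # per sort/hash operation, so multiplied by concurrency

# If you see stalls, try stepping shared_buffers down and re-testing, e.g.:
# shared_buffers = 16GB
```

Note that changing shared_buffers requires a server restart, so plan the step-down experiments around a maintenance window.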
Another important data point when considering this: we never experienced any crashes or errors with shared_buffers set at 64G ... just the stalls. So setting it too high appears to endanger performance, but nothing else.

A bit of advice coming from the other direction: shared_buffers doesn't really need to be any larger than the working set of your data. If you can estimate that, and (for example) it's only 4G, you don't need to set shared_buffers nearly that high, even if you have 4T of total data. Of course, estimating your working set can be difficult, but it's worth a look.

-- 
Bill Moran <wmoran@xxxxxxxxxxxxxxxxx>
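One way to get a rough feel for the working set is the contrib pg_buffercache extension, which exposes what is currently sitting in shared_buffers and how often it is being reused. A sketch (the usagecount >= 3 cutoff for "hot" buffers is an arbitrary threshold I picked, and the query avoids 9.4-only syntax so it works on 9.3):

```sql
-- Requires the contrib module: CREATE EXTENSION pg_buffercache;
-- Buffers cached per relation in the current database; each buffer is 8kB,
-- so buffers * 8 / 1024 gives MB resident in shared_buffers.
SELECT c.relname,
       count(*) AS buffers,
       sum(CASE WHEN b.usagecount >= 3 THEN 1 ELSE 0 END) AS hot_buffers
FROM pg_buffercache b
JOIN pg_class c
  ON b.relfilenode = pg_relation_filenode(c.oid)
WHERE b.reldatabase IN (0, (SELECT oid FROM pg_database
                             WHERE datname = current_database()))
GROUP BY c.relname
ORDER BY buffers DESC
LIMIT 20;
```

If the hot_buffers totals plateau well below your shared_buffers setting after the cache has warmed up, that's a hint the working set is smaller than what you've allocated.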