Hi,

On 2019-04-11 15:39:15 -0400, Jeff Janes wrote:
> But I don't think I would recommend starting at 25% of RAM on a larger
> server. Is that really good advice? I would usually start out at 1GB even
> if the server has 128GB, and increase it only if there was evidence it
> needed to be increased. Due to double buffering between shared_buffers and
> OS cache, 25% seems like a lot of wasted space. You need shared_buffers as
> a cooling off tank where dirty data can wait for their corresponding WAL
> to get flushed in the background before they get written out themselves.
> I think 1GB is enough for this, even if you have 128GB of RAM.

That runs very much contrary to my experience. If you actually get writes
into your cluster, having a small shared_buffers will create a vastly
larger amount of total writes: every time a page is evicted from shared
buffers, it will shortly afterwards be written out to disk by the OS,
whereas with a larger shared_buffers that would not happen.

Due to checkpoint sorting (~9.6?), writes from the checkpointer are also
vastly more efficient than either bgwriter-triggered or backend-triggered
writes, because it's much more likely that the OS / IO stack will write
combine them.

I think, with the exception of workloads that have a lot of truncations
(e.g. tests that create/drop schemas), which are slow due to the implied
shared buffers scan, a lot of the problems with large shared buffers have
been fixed. Far from perfect, of course (i.e. the double buffering you
mention).

Greetings,

Andres Freund
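[Editor's illustration] The write-amplification point above can be sketched with a toy buffer-pool simulation. This is an illustrative model only, not PostgreSQL's actual replacement algorithm (PostgreSQL uses a clock-sweep, not LRU), and all page counts and parameters here are made up:

```python
import random
from collections import OrderedDict

def simulate_writes(buffer_pages, updates, table_pages, seed=42):
    """Count page write-outs: dirty evictions plus one final checkpoint flush."""
    rng = random.Random(seed)
    pool = OrderedDict()                  # page -> True (dirty), in LRU order
    writes = 0
    for _ in range(updates):
        page = rng.randrange(table_pages)
        if page in pool:
            pool.move_to_end(page)        # hit: re-dirty the cached page in place
        else:
            if len(pool) >= buffer_pages:
                pool.popitem(last=False)  # evict least-recently-used page...
                writes += 1               # ...and its dirty data is forced to disk
            pool[page] = True
        pool[page] = True
    # Checkpoint at the end: each page still cached is written exactly once.
    writes += len(pool)
    return writes

small = simulate_writes(buffer_pages=100, updates=50_000, table_pages=1_000)
large = simulate_writes(buffer_pages=2_000, updates=50_000, table_pages=1_000)
print(f"small pool: {small} writes, large pool: {large} writes")
```

With the small pool, nearly every update forces a dirty page out to disk; with a pool large enough to hold the working set, each page is written only once at checkpoint time.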
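[Editor's illustration] The checkpoint-sorting point can be sketched similarly. This is not the checkpointer's actual code; it only models the assumption that the OS / IO stack can merge writes to adjacent block numbers into one larger request, and the dirty-set density is invented:

```python
import random

def io_requests(blocks):
    """Count I/O requests, assuming writes to adjacent block numbers merge."""
    reqs = 0
    prev = None
    for b in blocks:
        if prev is None or b != prev + 1:
            reqs += 1        # not contiguous with the previous block: new request
        prev = b
    return reqs

rng = random.Random(0)
# Hypothetical dirty set: ~70% of the blocks in a 10,000-block relation.
dirty = [b for b in range(10_000) if rng.random() < 0.7]
unsorted = dirty[:]
rng.shuffle(unsorted)

print(io_requests(sorted(dirty)), io_requests(unsorted))
```

Flushing in sorted order produces long contiguous runs that collapse into comparatively few requests; the shuffled order (roughly what unsorted bgwriter/backend writes look like) yields close to one request per block.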