Hi,

On 2019-06-17 19:45:41 -0400, Jeff Janes wrote:
> If not, I would set the value small (say, 8GB) and let the OS do the
> heavy lifting of deciding what to keep in cache.

FWIW, in my opinion this is not a good idea in most cases. E.g. Linux's
pagecache doesn't scale particularly gracefully to large amounts of data,
and its decisions about when to evict data aren't really better than
Postgres'. There's also significant potential for additional unnecessary
disk writes, because the kernel will flush dirty pagecache buffers, and
then we'll just re-issue many of those writes again.

It's a bit hard to be specific without knowing the workload, but my
guidance would be: if the data has some expected form of locality (e.g.
index lookups etc., rather than just sequentially scanning the whole
database), then sizing s_b for at least the amount of data likely to be
repeatedly accessed can be quite beneficial.

If increasing s_b achieves that most writes are issued by the checkpointer
rather than by backends and the bgwriter, the generated IO pattern is *far*
superior since 9.6, as checkpointer writes are sorted, whereas
bgwriter/backend writes aren't to a meaningful degree.

The one big exception is a workload that frequently needs to drop/truncate
non-temporary tables. For those we currently have to search shared_buffers
linearly, which, although the constants are fairly small, obviously means
that drops/truncations get noticeably slower with a larger shared_buffers.

- Andres
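
To get a rough idea how buffer writes are currently split between the
checkpointer, the bgwriter and backends, pg_stat_bgwriter can be queried;
this is a sketch against the column layout of the releases current at the
time of this thread (later major versions moved some of these counters to
other views):

  -- Share of buffer writes issued by the checkpointer vs. bgwriter vs.
  -- backends since the last stats reset. A low checkpointer share can be
  -- a hint that shared_buffers is too small for the write working set.
  SELECT buffers_checkpoint,
         buffers_clean   AS buffers_bgwriter,
         buffers_backend,
         round(100.0 * buffers_checkpoint
               / nullif(buffers_checkpoint + buffers_clean + buffers_backend, 0),
               1) AS checkpointer_write_pct
  FROM pg_stat_bgwriter;

For judging whether the hot working set actually fits in s_b, the
pg_buffercache extension shows what currently occupies shared_buffers; a
sketch along the lines of the example query in its documentation:

  -- Requires: CREATE EXTENSION pg_buffercache;
  -- Top relations by space occupied in shared_buffers for the current DB.
  SELECT n.nspname, c.relname,
         count(*) AS buffers,
         pg_size_pretty(count(*) * current_setting('block_size')::bigint) AS size
  FROM pg_buffercache b
  JOIN pg_class c
    ON b.relfilenode = pg_relation_filenode(c.oid)
   AND b.reldatabase IN (0, (SELECT oid FROM pg_database
                             WHERE datname = current_database()))
  JOIN pg_namespace n ON n.oid = c.relnamespace
  GROUP BY n.nspname, c.relname
  ORDER BY buffers DESC
  LIMIT 10;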