On 15 November 2014 06:00, Mark Kirkwood <mark.kirkwood@xxxxxxxxxxxxxxx> wrote:

> It is probably time to revisit this 8GB limit with some benchmarking. We
> don't really have a hard and fast rule that is known to be correct, and that
> makes Alexey's job really difficult. Informally folk (including myself at
> times) have suggested:
>
> min(ram/4, 8GB)
>
> as the 'rule of thumb' for setting shared_buffers. However I was recently

It would be nice to have more benchmarking and to improve the rule of thumb. I do, however, believe this is orthogonal to fixing pgtune, which I think should use the current rule of thumb (which is overwhelmingly min(ram/4, 8GB), as you suggest).

> benchmarking a machine with a lot of ram (1TB) and entirely SSD storage [1],
> and that seemed quite happy with 50GB of shared buffers (better performance
> than with 8GB). Now shared_buffers was not the variable we were
> concentrating on so I didn't get too carried away and try much bigger than
> about 100GB - but this seems like a good thing to come out with some numbers
> for, i.e. pgbench read-write and read-only tps vs shared_buffers 1 -> 100 GB
> in size.

I've always thought the shared_buffers setting would need to factor in things like CPU speed and memory access, since the rationale for the 8GB cap has always been the cost of scanning the data structures. The kernel would factor in too, since the PG-specific algorithms are in competition with the generic OS algorithms. And the size of the hot set, since this gets pinned in shared_buffers. Urgh, so many variables.

--
Stuart Bishop <stuart@xxxxxxxxxxxxxxxx>
http://www.stuartbishop.net/

--
Sent via pgsql-performance mailing list (pgsql-performance@xxxxxxxxxxxxxx)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-performance
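
[For reference, the min(ram/4, 8GB) rule of thumb discussed in the thread can be sketched as a one-liner. The helper name and the GB-based interface here are illustrative only; this is not pgtune's actual code.]

```python
def suggest_shared_buffers_gb(ram_gb: float) -> float:
    """Suggest a shared_buffers value in GB using the informal
    rule of thumb from the thread: min(ram/4, 8GB)."""
    return min(ram_gb / 4.0, 8.0)

# A 16GB box gets ram/4; a 1TB box hits the 8GB cap that the
# thread suggests may be worth revisiting on modern hardware.
print(suggest_shared_buffers_gb(16))    # -> 4.0
print(suggest_shared_buffers_gb(1024))  # -> 8.0
```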