On Wed, Apr 2, 2014 at 11:38:57AM +0200, Alexey Klyukin wrote:
> In most cases 8GB should be enough even for servers with hundreds of GB of
> data, since the FS uses the rest of the memory as a cache (make sure you
> give the planner a hint about how much memory is left for this with
> effective_cache_size), but the exact answer is a matter of performance
> testing.
>
> Now, the last question would be what was the initial justification for the
> 8GB barrier. I've heard that there was lock contention when dealing with a
> huge pool of buffers, but I think that was fixed even in the pre-9.0 era.

The issue in earlier releases was the overhead of managing more than
1 million 8kB buffers.  I have not seen any recent tests confirming that
this overhead is still significant.

A larger issue is that going over 8GB doesn't help unless you are accessing
more than 8GB of data in a short period of time.  Add to that the problem of
potentially dirtying all the buffers and flushing them to a now-smaller
kernel buffer cache, and you can see why the 8GB limit is recommended.

I do think this merits more testing against the current Postgres source code.

--
  Bruce Momjian  <bruce@xxxxxxxxxx>        http://momjian.us
  EnterpriseDB                             http://enterprisedb.com

  + Everyone has their own god. +

--
Sent via pgsql-performance mailing list (pgsql-performance@xxxxxxxxxxxxxx)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-performance
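
As an illustration of the settings discussed in the thread above, here is a
minimal postgresql.conf sketch for a hypothetical server with 64GB of RAM
(the parameter names are real, but the machine size and values are assumed
for illustration, not taken from the thread):

    # postgresql.conf -- hypothetical 64GB RAM server
    shared_buffers = 8GB             # capped per the 8GB recommendation above
    effective_cache_size = 48GB      # planner hint: roughly the OS page cache
                                     # expected to remain after shared_buffers

As noted above, the exact values are a matter of performance testing on your
own workload.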