On 2/15/2014 4:30 PM, Antman, Jason (CMG-Atlanta) wrote:
My current postgres instances for testing have 16GB shared_buffers (and 5MB work_mem, 24GB effective_cache_size). So if, hypothetically (to give a mathematically simple example), I have a host machine with 100GB RAM, I can't run 10 postgres instances with those settings, right? I'd still need to provide for the memory needs of each postgres server/instance separately?
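The arithmetic behind the question can be sketched quickly (numbers are the ones quoted above; a minimal illustration, since shared_buffers is a real shared-memory allocation per instance and cannot be oversubscribed the way the OS page cache can):

```python
# Per-instance settings from the post, hypothetical 100 GB host.
GB = 1024 ** 3
host_ram = 100 * GB
shared_buffers = 16 * GB   # hard allocation, reserved per instance at startup
instances = 10

total_shared = instances * shared_buffers   # 160 GB of shared_buffers alone
fits = total_shared <= host_ram

print(total_shared // GB, fits)
```

And that is before counting per-backend work_mem, OS overhead, and page cache, so yes: each instance's memory has to be budgeted separately.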
Does 16GB of shared_buffers really give that much better performance than 2 or 4 GB for your application, under a development workload?
effective_cache_size is not an allocation; it's just an estimate of how much of the system cache is likely to contain recently accessed postgres data. The planner uses it to guess the cost of 'disk' accesses.
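To illustrate the distinction, here is how the three settings quoted in this thread would look in postgresql.conf (values are from the post; only the first two actually consume memory):

```ini
# postgresql.conf -- values from this thread, for illustration
shared_buffers = 16GB         # real allocation: shared memory reserved at startup
work_mem = 5MB                # per-sort/hash allocation, per backend, as needed
effective_cache_size = 24GB   # planner hint only: estimated OS cache available
                              # to postgres; reserves nothing
```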
--
john r pierce                                      37N 122W
somewhere on the middle of the left coast

--
Sent via pgsql-general mailing list (pgsql-general@xxxxxxxxxxxxxx)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general