On Fri, Oct 28, 2016 at 10:44 AM, Warner, Gary, Jr <gar@xxxxxxx> wrote:

> I've recently been blessed to move one of my databases onto a
> huge IBM P8 computer.  It's a PowerPC architecture with 20 8-way
> cores (so postgres SHOULD believe there are 160 cores available)
> and 1 TB of RAM.

> So . . . what would I want to do differently based on the fact
> that I have a "very high memory system"?

What OS are you looking at?

The first advice I would give is to use a very recent version of
both the OS and PostgreSQL.  Machines this large are a recent
enough phenomenon that older software is not likely to be
optimized to perform well on them.  For similar reasons, be sure
to stay up to date with minor releases of both.

If the OS has support for them, you probably want to become
familiar with these commands:

  numactl --hardware
  lscpu

You may want to benchmark different options, but I suspect you
will see better performance by putting each database in a separate
cluster and using cpusets (or the equivalent) so that each cluster
uses a subset of the 160 cores and the RAM directly attached to
that subset.

--
Kevin Grittner
EDB:  http://www.enterprisedb.com
The Enterprise PostgreSQL Company

--
Sent via pgsql-performance mailing list (pgsql-performance@xxxxxxxxxxxxxx)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-performance
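
[Editor's sketch of the cluster-per-cpuset suggestion above, using
numactl rather than a cpuset filesystem.  The node numbers and data
directory paths are hypothetical; on a real P8 box, read them off the
numactl --hardware output first.]

  # Inspect the NUMA topology; how many nodes exist and which CPUs
  # and memory belong to each node varies per machine.
  numactl --hardware
  lscpu

  # Run each PostgreSQL cluster bound to a single NUMA node so it
  # uses only that node's cores and its locally attached RAM.
  # Data directories here are made up for illustration.
  numactl --cpunodebind=0 --membind=0 pg_ctl -D /pgdata/cluster0 start
  numactl --cpunodebind=1 --membind=1 pg_ctl -D /pgdata/cluster1 start

Binding both CPU and memory to the same node keeps each cluster's
memory accesses local, which is the main win on large NUMA systems.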