On 29 June 2017 16:19:41 CEST, Willy-Bas Loos <willybas@xxxxxxxxx> wrote:
>Hi,
>
>We have a PostgreSQL database that is now 1.4 TB on disk and slowly
>growing.
>In the past we had (read) performance trouble with this database, and
>the solution was to buy a server that could fit the db into memory. It
>had 0.5 TB of RAM and at the time could hold all of the data easily.
>Those servers are now old, the db has outgrown the RAM, and we are
>doing more reads and writes too (though the problem has not yet
>returned).
>
>So I am looking into buying new servers. I'm thinking of equipping
>them with 1 TB of RAM and room to expand. The database will not fit
>completely, but it largely will. Also, if we can afford it, they will
>have SSDs instead of RAID10 SAS spindles.
>
>But I've read that there is some kind of maximum for shared_buffers,
>beyond which increasing it would actually decrease performance.
>Is 1 TB of RAM, or even 2 TB, always a good thing?
>And is there anything special I should look out for when configuring
>such a server?
>Or would it be much better to buy 2 smaller servers and tie them
>together somehow (partitioning, replication, ...)?

With current versions you can set shared_buffers to, for instance, 40% of RAM without a problem. Also tune the checkpointer.

Regards,
Andreas.

--
2ndQuadrant - The PostgreSQL Support Company
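
To make that advice concrete, here is a minimal postgresql.conf sketch for a hypothetical server with 1 TB of RAM. The parameter names are real PostgreSQL settings, but the values are illustrative assumptions, not recommendations from this thread, and should be benchmarked against the actual workload:

    # postgresql.conf: hypothetical starting point for a server with 1 TB of RAM
    shared_buffers = 400GB              # ~40% of RAM, per the reply above
    effective_cache_size = 800GB        # planner hint: RAM likely available as OS page cache
    # Checkpointer tuning: spread checkpoint writes out over time to avoid I/O spikes
    checkpoint_timeout = 30min          # checkpoint at most every 30 minutes
    checkpoint_completion_target = 0.9  # pace writes to finish just before the next checkpoint
    max_wal_size = 64GB                 # enough WAL headroom that checkpoints stay timeout-driven

With checkpoint_completion_target near 1.0, PostgreSQL paces the background flushing so each checkpoint finishes shortly before the next one begins, which smooths the write bursts that a very large shared_buffers pool can otherwise produce.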