On 29/06/2017 17:19, Willy-Bas Loos wrote:
Hi,
We have a PostgreSQL database that is now 1.4 TB in
disk size and slowly growing.
In the past, we've had (read) performance trouble with
this database and the solution was to buy a server
that can fit the db into memory. It had 0.5 TB of RAM
and at the time it could hold all of the data easily.
Those servers are now old and the db has outgrown the
RAM and we are doing more reads and writes too (but the
problem has not yet returned).
So I am looking into buying new servers. I'm thinking of
equipping it with 1TB of RAM and room to expand. So the
database will not fit completely, but largely anyway.
Also, if we can afford it, it will have SSDs instead of
RAID10 SAS spindles.
But I've read that there is some kind of maximum to the
shared_buffers, where increasing it would actually decrease
performance.
Is 1TB of RAM, or even 2TB, always a good thing?
And is there anything special that I should look out for when
configuring such a server?
Or would it be much better to buy 2 smaller servers and tie them
together somehow? (partitioning, replication, ...)
Our DB is also in the 1TB+ range. It is hosted in a cloud VM with
only 32GB RAM but ultra-fast SSD disks. No problems.
IIRC the "fit DB into RAM" was a trend many years back. The new
recommendation for shared buffers is about 25% of RAM. Leaving the
rest to be utilized mainly by the kernel cache, also by other
programs in the system.
--
Achilleas Mantzios
IT DEV Lead
IT DEPT
Dynacom Tankers Mgmt