Hi folks!

I support a PostgreSQL database that grew from 100 to 300 GB in a couple of months, and we need to tune the OS parameters (it is a virtualized environment). The current configuration is:

Virtual machine: Ubuntu 18.04
CPUs: 12
RAM: 8 GB
Disk: 400 GB
Database cluster size: 320 GB

Query times vary widely: the same query sometimes runs in 8-10 seconds and sometimes in 12-15 minutes, and many queries join tables of 40-50 million rows. I know that factors like the relationships between entities affect timing, but I suspect the main problem is OS resources, specifically a lack of RAM. We need to justify the RAM upgrade, so: how do you calculate how much RAM a PostgreSQL server needs? I also need to know this going forward, because the database keeps growing.

Best regards,
DrakoRod
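P.S. In case it helps with suggestions, a sketch of how the cluster size can be broken down with PostgreSQL's built-in size functions (nothing here is specific to our schema):

    -- Total size of the current database
    SELECT pg_size_pretty(pg_database_size(current_database()));

    -- Largest tables with their index footprint, to estimate the working set
    SELECT relname,
           pg_size_pretty(pg_table_size(oid))   AS table_size,
           pg_size_pretty(pg_indexes_size(oid)) AS index_size
    FROM pg_class
    WHERE relkind = 'r'
    ORDER BY pg_total_relation_size(oid) DESC
    LIMIT 10;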
IMHO you need to profile the workload (e.g. EXPLAIN ANALYZE on the queries in question), look at the indices in use and whether the planner can be helped with additional ones on the tables, then look at the size of those indices, how all of this fits into the RAM you have, and what storage is behind it (RAM -> SSD -> spinning rust).
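For example (the table, column, and index names here are made up, just to show the shape of it):

    -- Time the problem query and see which indexes the planner actually uses
    EXPLAIN (ANALYZE, BUFFERS)
    SELECT b.id, o.amount
    FROM big_table b
    JOIN other_table o ON o.big_table_id = b.id
    WHERE b.created_at > now() - interval '30 days';

    -- Then check how big the index the plan relies on is
    SELECT pg_size_pretty(pg_relation_size('big_table_created_at_idx'));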
When the working set of the indices required for the queries in question (not so much the data itself) gets outside of the available RAM, performance drops off fast, since any sort of storage is far slower, with spinning rust being excruciatingly slow because it has seek-time penalties that SSDs do not.
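A quick way to see whether that is already happening is the buffer cache hit ratio from the statistics views (this only covers shared_buffers, not the OS page cache, but a ratio well below ~99% on a busy box usually means reads are going to disk):

    -- Overall cache hit ratio for the current database
    SELECT blks_hit::float / NULLIF(blks_hit + blks_read, 0) AS cache_hit_ratio
    FROM pg_stat_database
    WHERE datname = current_database();

    -- Per-index hit ratios, to spot indexes that no longer fit in memory
    SELECT indexrelname,
           idx_blks_hit::float / NULLIF(idx_blks_hit + idx_blks_read, 0) AS hit_ratio
    FROM pg_statio_user_indexes
    ORDER BY idx_blks_read DESC
    LIMIT 10;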