Re: Optimal configuration for server

On 3/7/22 12:51, Luiz Felipph wrote:
> Hi everybody!
> 
> I have a big application running on premise. One of my main database
> servers has the following configuration:
> 
> 72 CPUs(2 chips, 18 physical cores per chip, 2 threads) Xeon Gold 6240
> 1TB of RAM or 786GB (5 servers in total)
> A huge storage array (I don't know for sure what kind it is, but it is very powerful)
> 
> A consulting company recommended the following configuration for these
> main servers (let me know if something important was left out):
> 
> max_connections = 2000
> shared_buffers = 32GB
> temp_buffers = 1024
> max_prepared_transactions = 3000
> work_mem = 32MB
> effective_io_concurrency = 200
> max_worker_processes = 24
> checkpoint_timeout = 15min
> max_wal_size = 64GB
> min_wal_size = 2GB
> effective_cache_size = 96GB
> (...)
> 
> I think this memory configuration is too low for the size of the server...
> As for the number of connections, I'm still measuring in order to reduce
> this value (I think it's too high for the application's needs, but until
> it gets high enough to cause a memory issue, I don't think it's a problem)
> 

Hard to judge without knowing your workload. We also don't know what
information was provided to the consulting company; you'll have to ask
them to justify the values they recommended.

I'd say it looks OK, but max_connections/max_prepared_transactions are
rather high, considering you only have 72 threads. But it depends ...

> My current problem:
> 
> under heavy load, I'm getting "connection closed" at the application
> level (java-jdbc, jboss ds)
> 

Most likely a java/jboss connection pool config. The database won't just
arbitrarily close connections (unless there are timeouts set, but you
haven't included any such info).
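If you want to rule out server-side timeouts before digging into the
connection pool, you can check the relevant settings directly (these are
standard PostgreSQL parameters; `idle_session_timeout` only exists on
PostgreSQL 14 and later):

```sql
-- Timeouts that can cause the server to terminate queries or sessions.
-- A value of 0 means "disabled".
SHOW statement_timeout;
SHOW idle_in_transaction_session_timeout;
-- PostgreSQL 14+ only:
SHOW idle_session_timeout;
```

If all of these are 0, the disconnects are almost certainly coming from
the pool (idle-timeout / max-lifetime settings) or the network, not from
PostgreSQL itself.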

> The server never spikes above 200GB of used RAM (that's why I think
> the configuration is too low)
> 

Unlikely. If it were needed, the system would use the memory for the page
cache, i.e. to cache filesystem data. So most likely the database simply
isn't large enough to need more memory.

You're optimizing the wrong thing - the goal is not to use as much
memory as possible. The goal is to give good performance given the
available amount of memory.

You need to monitor shared buffers cache hit rate (from pg_stat_database
view) - if that's low, increase shared buffers. Then monitor and tune
slow queries - if a slow query benefits from higher work_mem values, do
increase that value. It's nonsense to just increase the parameters to
consume more memory.
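For example, a rough sketch of the cache hit ratio per database, computed
from the `blks_hit` and `blks_read` counters in `pg_stat_database` (note
these count hits in shared buffers only, so OS page cache hits show up as
"reads"):

```sql
-- Approximate shared_buffers hit ratio per database
SELECT datname,
       blks_hit,
       blks_read,
       round(blks_hit::numeric / nullif(blks_hit + blks_read, 0), 4)
         AS hit_ratio
  FROM pg_stat_database
 WHERE datname IS NOT NULL
 ORDER BY blks_read DESC;
```

If the ratio stays low (say, well below 0.99) on your hot databases, that
is a reason to consider a larger shared_buffers; if it's already high,
more memory there buys you little.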


regards

-- 
Tomas Vondra
EnterpriseDB: http://www.enterprisedb.com
The Enterprise PostgreSQL Company




