Re: Tuning / performance questions

Bryan Montgomery wrote:

> We have a system, that due to recent events is getting a lot
> heavier use. The application makes extensive use of functions.
> These functions would typically run sub-second but now can take
> several seconds.

> The server is running on suse 11.4 with 8 vcpu and 32Gb ram on a
> virtual machine.
> 
> Running pg_version returns 'PostgreSQL 9.1.0 on
> x86_64-unknown-linux-gnu, compiled by gcc (SUSE Linux) 4.5.1
> 20101208 [gcc-4_5-branch revision 167585], 64-bit' and select
> pg_size_pretty(pg_database_size('nrgdb')); returns 63 GB.

> The server typically has up to 500 connections with a max of 750
> connections.

> max_connections = 750 # (change requires restart)

> temp_buffers = 64MB # min 800kB

> work_mem = 512MB # min 64kB

750 * (512 MB + 64 MB) = 421.875 GB

Once a connection allocates memory for temp_buffers, it holds that
memory for as long as the connection lives. Each connection can also
allocate work_mem for each node of a query plan; even though peak
usage can be more than one allocation per connection, the rule of
thumb is to assume just one. That puts you at 422 GB, plus shared
memory, plus OS space (including some buffers and cache), plus
whatever else you run on this VM. You have 32 GB. You will almost
certainly have problems at high load.
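The arithmetic behind that 422 GB figure, spelled out (a back-of-the-envelope
sketch using the settings quoted above, with the one-allocation-per-connection
rule of thumb; real usage can be higher):

```python
# Worst-case memory estimate from the posted settings, assuming one
# work_mem allocation plus temp_buffers per connection. Actual usage
# can exceed this, since work_mem applies per query-plan node.
MAX_CONNECTIONS = 750
WORK_MEM_MB = 512
TEMP_BUFFERS_MB = 64

total_mb = MAX_CONNECTIONS * (WORK_MEM_MB + TEMP_BUFFERS_MB)
total_gb = total_mb / 1024
print(f"{total_gb:.3f} GB")  # 421.875 GB, against 32 GB of physical RAM
```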

Try putting pgbouncer in front of the database, configured for
transaction pooling mode, accepting 750 client connections while
keeping only 20 or so database connections open.
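A minimal pgbouncer.ini sketch of that setup (the host, port, and
auth-file path here are illustrative assumptions, not taken from the
original post; only the database name and the 750/20 connection
counts come from the thread):

```ini
; Hypothetical pgbouncer.ini -- host, ports, and auth paths are
; placeholders; adjust for your environment.
[databases]
nrgdb = host=127.0.0.1 port=5432 dbname=nrgdb

[pgbouncer]
listen_addr = *
listen_port = 6432
pool_mode = transaction        ; release server connection at commit/rollback
max_client_conn = 750          ; what the application expects to open
default_pool_size = 20         ; actual backend connections to PostgreSQL
auth_type = md5
auth_file = /etc/pgbouncer/userlist.txt
```

Transaction mode works here because each server connection is returned
to the pool at transaction end, so 750 application connections can
share a handful of real backends.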

> max_prepared_transactions = 250 # can be 0 or more

Are you really using two-phase commit and a transaction manager?
(Don't confuse this setting with something related to prepared
*statements* -- prepared *transactions* are a separate issue.)  Even
if you are using prepared transactions, do you really expect your
transaction manager to let 250 transactions pile up in the database
between the first and second phase of commit?

BTW, you should be monitoring the pg_prepared_xacts view for old
prepared transactions, to prevent problems with bloat.
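Something along these lines would do for that monitoring (the one-hour
threshold is an arbitrary example; pick whatever your transaction
manager's normal commit latency justifies):

```sql
-- Flag prepared transactions that have been sitting for over an hour.
-- Long-lived entries hold back VACUUM and cause table bloat.
SELECT gid, prepared, owner, database
FROM pg_prepared_xacts
WHERE prepared < now() - interval '1 hour'
ORDER BY prepared;
```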

> fsync = off
> full_page_writes = off

You didn't mention your backup scheme, but be prepared for the fact
that with these settings, if the VM (or its underlying OS or
hardware) fails, your database will be corrupted and you may have no
choice but to restore from backup.
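To get crash safety back, the fix is simply the defaults (expect
slower commits in exchange for durable storage, assuming the
underlying disks actually honor flush requests):

```ini
# postgresql.conf -- restore the durable defaults
fsync = on
full_page_writes = on
```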

> commit_delay = 1000 # range 0-100000, in microseconds
> 
> commit_siblings = 5 # range 1-1000

These settings are notoriously hard to tune away from the defaults
without actually making things worse. Be very sure you know what you
are doing, and benchmark carefully against your real workload;
otherwise it is probably better to put them back to the defaults.

There may be some other fine-tuning opportunities, but these issues
should be fixed first, and it would be best to have an actual query
that is performing poorly to try to tune some of the other settings.

-Kevin


-- 
Sent via pgsql-general mailing list (pgsql-general@xxxxxxxxxxxxxx)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general

