Re: [PATCH] postgresql_server / db-koji01: Adjust a bunch more for performance.

On Thu, 10 Oct 2019 at 12:02, <kevin@xxxxxxxxx> wrote:
>
> From: Kevin Fenzi <kevin@xxxxxxxxx>
>
> The current settings cause database dumps to drive the load way up
> and make the entire application slow, so we need to adjust.
> Using pgtune, these values might well be better.
>
> shared_buffers + effective_cache_size should equal total memory.
> random_page_cost should be lowered significantly since we are on SSDs there;
> 1.1 is only slightly more than the 1.0 cost of a sequential read.
> effective_io_concurrency should also be raised significantly for SSDs.
> A few other values should be higher based on the available memory.
>
> Signed-off-by: Kevin Fenzi <kevin@xxxxxxxxx>
> ---
>  inventory/host_vars/db-koji01.phx2.fedoraproject.org |  4 ++--
>  roles/postgresql_server/templates/postgresql.conf    | 12 +++++++-----
>  2 files changed, 9 insertions(+), 7 deletions(-)
>
> diff --git a/inventory/host_vars/db-koji01.phx2.fedoraproject.org b/inventory/host_vars/db-koji01.phx2.fedoraproject.org
> index 49a19d8..587e561 100644
> --- a/inventory/host_vars/db-koji01.phx2.fedoraproject.org
> +++ b/inventory/host_vars/db-koji01.phx2.fedoraproject.org
> @@ -45,5 +45,5 @@ nrpe_procs_warn: 600
>  nrpe_procs_crit: 700
>
>  db_backup_dir: ['/backups']
> -shared_buffers: "32GB"
> -effective_cache_size: "32GB"
> +shared_buffers: "30GB"
> +effective_cache_size: "90GB"
> diff --git a/roles/postgresql_server/templates/postgresql.conf b/roles/postgresql_server/templates/postgresql.conf
> index b148442..cbaaaae 100644
> --- a/roles/postgresql_server/templates/postgresql.conf
> +++ b/roles/postgresql_server/templates/postgresql.conf
> @@ -121,8 +121,8 @@ shared_buffers = {{ shared_buffers }}   # min 128kB or max_connections*16kB
>
>  # 8 MB is probably on the high side.  We can probably do with 4MB.  But we
>  # were seeing a problem and we have the RAM so we're going to try this.
> -work_mem = 4MB                          # min 64kB
> -maintenance_work_mem = 1024MB           # min 1MB
> +work_mem = 157286kB
> +maintenance_work_mem = 2GB              # min 1MB
>  #max_stack_depth = 2MB                  # min 100kB
>
>  # - Free Space Map -
> @@ -169,7 +169,7 @@ wal_sync_method = fdatasync             # the default is the first option
>                                          #   fsync_writethrough
>                                          #   open_sync
>  #full_page_writes = on                  # recover from partial page writes
> -wal_buffers = 64kB                      # min 32kB
> +wal_buffers = 16MB                      # min 32kB
>                                          # (change requires restart)
>  #wal_writer_delay = 200ms               # 1-10000 milliseconds
>
> @@ -212,7 +212,7 @@ checkpoint_warning = 180s               # 0 is off
>  # - Planner Cost Constants -
>
>  #seq_page_cost = 1.0                    # measured on an arbitrary scale
> -random_page_cost = 3.0                  # same scale as above
> +random_page_cost = 1.1                  # same scale as above
>  #cpu_tuple_cost = 0.01                  # same scale as above
>  #cpu_index_tuple_cost = 0.005           # same scale as above
>  #cpu_operator_cost = 0.0025             # same scale as above
> @@ -502,4 +502,6 @@ default_text_search_config = 'pg_catalog.english'
>  #------------------------------------------------------------------------------
>
>  #custom_variable_classes = ''           # list of custom variable class names
> -
> +#
> +# Number of concurrent i/o operations at the same time. The default is 1.
> +effective_io_concurrency = 100

OK, I think we have a problem with the above value. The db servers
seem to be running at a load average of over 140 consistently, and we
have been getting timeouts and problems with Nagios. I think we should
look at dropping that to 32 and see if that brings the load average
down. While the SSDs are much faster, they do not seem to have the
cache, and it looks like IO is backing up.
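
If it helps while testing, effective_io_concurrency can be changed without a
restart, so we could try 32 on db-koji01 before touching the ansible role.
A minimal sketch, assuming the server is new enough for ALTER SYSTEM (9.4+):

    -- show the value currently in effect
    SHOW effective_io_concurrency;

    -- lower it cluster-wide; this writes postgresql.auto.conf, which
    -- overrides postgresql.conf until ALTER SYSTEM RESET is run
    ALTER SYSTEM SET effective_io_concurrency = 32;
    SELECT pg_reload_conf();

If the load average comes back down after that, the same change can go into
roles/postgresql_server/templates/postgresql.conf so ansible keeps it.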


-- 
Stephen J Smoogen.
_______________________________________________
infrastructure mailing list -- infrastructure@xxxxxxxxxxxxxxxxxxxxxxx
To unsubscribe send an email to infrastructure-leave@xxxxxxxxxxxxxxxxxxxxxxx
Fedora Code of Conduct: https://docs.fedoraproject.org/en-US/project/code-of-conduct/
List Guidelines: https://fedoraproject.org/wiki/Mailing_list_guidelines
List Archives: https://lists.fedoraproject.org/archives/list/infrastructure@xxxxxxxxxxxxxxxxxxxxxxx



