Re: PostgreSQL 9.2 - pg_dump out of memory when backing up a database with 300000000 large objects

Maybe you can improve your database's performance by tuning some parameters:

PostgreSQL configuration (your current values, with suggestions inline; a consolidated sketch follows the list):

listen_addresses = '*'                # what IP address(es) to listen on
port = 5432                           # (change requires restart)
max_connections = 500                 # (change requires restart)
    -> consider 100, the PostgreSQL default, unless you really need that many connections
shared_buffers = 16GB                 # min 128kB
    -> this value should not be higher than 8GB
temp_buffers = 64MB                   # min 800kB
work_mem = 512MB                      # min 64kB
maintenance_work_mem = 30000MB        # min 1MB
    -> given 96GB of RAM, set this no higher than about 4800MB (5% of RAM)
checkpoint_segments = 70              # in logfile segments, min 1, 16MB each
effective_cache_size = 50000MB
    -> given 96GB of RAM, you could raise this up to 80GB
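
If it helps, here is roughly how those suggestions could look once applied in postgresql.conf. This is a minimal sketch for the 96GB machine described above, not a definitive tuning. Of the values being changed, maintenance_work_mem and effective_cache_size take effect on a reload, while max_connections and shared_buffers need a full restart:

    # postgresql.conf -- sketch with the suggested values applied
    listen_addresses = '*'
    port = 5432
    max_connections = 100             # PostgreSQL default; change requires restart
    shared_buffers = 8GB              # change requires restart
    temp_buffers = 64MB
    work_mem = 512MB
    maintenance_work_mem = 4800MB     # ~5% of 96GB RAM
    checkpoint_segments = 70          # still valid on 9.2
    effective_cache_size = 80GB       # planner hint only; allocates no memory

After editing, reload with "pg_ctl reload -D /path/to/data" (or "SELECT pg_reload_conf();" from psql), and restart the server for the two restart-only parameters. Since maintenance_work_mem is also settable per session, you could alternatively raise it just for the backup run, e.g.

    PGOPTIONS='-c maintenance_work_mem=4800MB' pg_dump -Fc -f mydb.dump mydb

where "mydb" is a placeholder for your database name; pg_dump is a libpq client, so it honours PGOPTIONS.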


Hope it helps.

Giuseppe.

--
Giuseppe Broccolo - 2ndQuadrant Italy
PostgreSQL Training, Services and Support
giuseppe.broccolo@xxxxxxxxxxxxxx | www.2ndQuadrant.it


