Re: PostgreSQL 9.2 - pg_dump out of memory when backing up a database with 300000000 large objects

No, it did not make any difference, and after looking through pg_dump.c and pg_dump_sort.c, I cannot tell how it possibly could. See the stack trace that I sent to the list.
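For anyone hitting the same wall: pg_dump appears to allocate an in-memory TOC entry for every large object, so with 300000000 of them the dump's memory use grows roughly with the blob count. A possible workaround (untested sketch; mydb, the public schema, and the lo/ directory are placeholders) is to dump the schemas without blobs (pg_dump skips large objects when -n or -t is given, unless -b is added) and export the blobs through a single psql session:

    # Sketch only. Dump everything except large objects
    # (placeholders: mydb, public; add one -n per schema).
    pg_dump -Fc -n public -f mydb.no_blobs.dump mydb

    # Generate one \lo_export command per large object, then run them
    # all in a single psql session. format() needs 9.1+, and
    # pg_largeobject_metadata exists since 9.0.
    mkdir -p lo
    psql -At -d mydb \
      -c "SELECT format('\lo_export %s lo/%s.bin', oid, oid) FROM pg_largeobject_metadata" \
      > lo_export.psql
    psql -d mydb -f lo_export.psql

Note that this does not preserve large object ownership or ACLs, and restoring the blobs under their original OIDs would need the server-side lo_import(filename, oid) variant.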

Thanks.

On 01.10.2013 15:01, Giuseppe Broccolo wrote:
Maybe you can improve your database's performance by tuning some parameters (a consolidated fragment follows the list):

PostgreSQL configuration:

listen_addresses = '*'                  # what IP address(es) to listen on;
port = 5432                             # (change requires restart)
max_connections = 500                   # (change requires restart)
  -> Set it to 100, the default value; 500 is probably more than you need.
shared_buffers = 16GB                   # min 128kB
  -> This value should not be higher than 8GB.
temp_buffers = 64MB                     # min 800kB
work_mem = 512MB                        # min 64kB
maintenance_work_mem = 30000MB          # min 1MB
  -> Given 96GB of RAM, this should be at most about 4800MB (5% of RAM).
checkpoint_segments = 70                # in logfile segments, min 1, 16MB each
effective_cache_size = 50000MB
  -> Given 96GB of RAM, you could raise this to 80GB.
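
Putting those suggestions together, a sketch of the resulting postgresql.conf fragment (starting values for a 96GB machine, not tuned for your specific workload):

    # postgresql.conf (sketch; adjust to your workload)
    max_connections = 100          # the default; 500 is rarely needed
    shared_buffers = 8GB           # capped at 8GB as noted above
    temp_buffers = 64MB
    work_mem = 512MB               # per sort/hash operation, per session
    maintenance_work_mem = 4800MB  # about 5% of 96GB
    checkpoint_segments = 70       # 16MB each
    effective_cache_size = 80GB    # about 80% of 96GB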


Hope this helps.

Giuseppe.


--
Sergey Klochkov
klochkov@xxxxxxxxx

