I've upgraded to 9.2.4. The problem persists: pg_dump consumed 10 GB of
RAM in 5 minutes and is still growing. The dump file never appeared.
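For reference, here is roughly how the growth can be watched while pg_dump
runs (a sketch; it assumes this is the only pg_dump process on the box, ps
reports rss in kB, and the loop exits once the process dies):

$ while ps -C pg_dump -o pid=,rss=; do sleep 10; done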
On 01.10.2013 14:04, Jov wrote:
Try updating to the latest release. I see there is a bug fix for pg_dump
running out of memory in 9.2.2; from the release notes,
http://www.postgresql.org/docs/devel/static/release-9-2-2.html:
* Work around unportable behavior of malloc(0) and realloc(NULL, 0)
(Tom Lane)
On platforms where these calls return NULL, some code mistakenly
thought that meant out-of-memory. This is known to have broken
pg_dump for databases containing no user-defined aggregates. There
might be other cases as well.
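When retesting after the upgrade, it may be worth double-checking that the
new binaries are really the ones being run, on both the client and the
server side (a sketch; mydb as in the original report):

$ pg_dump --version
$ psql -d mydb -At -c "SELECT version();"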
Jov
blog: http://amutu.com/blog
2013/10/1 Sergey Klochkov <klochkov@xxxxxxxxx>
Hello All,
While trying to back up a database of relatively modest size (160 GB)
I ran into the following issue:
When I run
$ pg_dump -f /path/to/mydb.dmp -C -Z 9 mydb
the file /path/to/mydb.dmp does not appear (yes, I've checked
permissions and so on). pg_dump just keeps consuming memory until it
eats up all available RAM (96 GB total on the server, >64 GB
available) and is killed by the OOM killer.
According to pg_stat_activity, pg_dump runs the following query until
it is killed:

SELECT oid,
       (SELECT rolname FROM pg_catalog.pg_roles WHERE oid = lomowner) AS rolname,
       lomacl
FROM pg_largeobject_metadata
strace shows that pg_dump is constantly reading a large amount of
data from a UNIX socket. I suspect that it is the result of the
above query.
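One way to confirm that suspicion without blowing up the client is to
stream the same columns through COPY, which does not buffer the whole
result set in libpq the way a plain SELECT does (a sketch; wc -c merely
measures the volume, and it will still take a while at this row count):

$ psql -d mydb -c "COPY (SELECT oid, lomowner, lomacl FROM pg_largeobject_metadata) TO STDOUT" | wc -c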
There are more than 300,000,000 large objects in the database. Please
don't ask me why.
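For reference, the figure comes straight from the metadata catalog and
can be checked with (a sketch; substitute your own database name):

$ psql -d mydb -t -c "SELECT count(*) FROM pg_largeobject_metadata;"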
I tried googling for this and found mentions of pg_dump being killed
by the OOM killer, but nothing related to a huge number of large
objects.
Is there any way to work around this issue?
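So far the only idea I have is to leave the blobs out of the main dump,
since pg_dump skips large objects whenever a schema is selected with -n
unless -b is also given (a sketch, untested at this scale; it assumes
all user tables live in the public schema):

$ pg_dump -f /path/to/mydb.nolobs.dmp -C -Z 9 -n public mydb

The large objects would then have to be exported separately, e.g. by
iterating over pg_largeobject_metadata and exporting each one with
psql's \lo_export, instead of having pg_dump track every single object
in memory.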
Thanks in advance.
OS: CentOS 6
PostgreSQL version: 9.2.1
96 GB RAM
PostgreSQL configuration:
listen_addresses = '*'
port = 5432
max_connections = 500
shared_buffers = 16GB
temp_buffers = 64MB
work_mem = 512MB
maintenance_work_mem = 30000MB
checkpoint_segments = 70
effective_cache_size = 50000MB
logging_collector = on
log_directory = 'pg_log'
log_filename = 'postgresql-%a.log'
log_truncate_on_rotation = on
log_rotation_age = 1d
log_rotation_size = 0
log_min_duration_statement = 5000
log_line_prefix = '%t'
autovacuum = on
log_autovacuum_min_duration = 0
autovacuum_max_workers = 5
autovacuum_naptime = 5s
autovacuum_vacuum_threshold = 25
autovacuum_vacuum_scale_factor = 0.1
autovacuum_vacuum_cost_delay = 7ms
autovacuum_vacuum_cost_limit = 1500
datestyle = 'iso, dmy'
lc_monetary = 'ru_RU.UTF-8'
lc_numeric = 'ru_RU.UTF-8'
lc_time = 'ru_RU.UTF-8'
default_text_search_config = 'pg_catalog.russian'
--
Sergey Klochkov
klochkov@xxxxxxxxx
--
Sergey Klochkov
klochkov@xxxxxxxxx
--
Sent via pgsql-admin mailing list (pgsql-admin@xxxxxxxxxxxxxx)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-admin