There is no problem dumping large tables using parallel dump. My script had a limit on the file size that was causing parallel dump to abort on large tables.
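For anyone who hits the same symptom: the cap was coming from the wrapper script, not from pg_dump or the system-wide settings. A minimal sketch of the kind of wrapper that reproduces this, assuming a bash script; the database name, output directory, job count, and limit value below are only illustrative:

    #!/bin/bash
    # A per-process file size cap set here (or in a profile the script
    # sources) is inherited by every pg_dump worker it launches, so the
    # largest table's file in the directory-format dump trips
    # "File size limit exceeded".
    ulimit -f 10485760                            # ~10 GiB cap (bash counts 1 KiB blocks)

    pg_dump -Fd -j 8 -f /backup/mydb.dir mydb

    # Fix: remove the cap (or raise it past the largest table's dump file)
    # before starting the dump:
    #   ulimit -f unlimited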
Thanks, everyone, for your valuable suggestions.

Thanks,
shanker

From: Shanker Singh
I tried dumping the largest table that is having problems, using the -j1 flag in parallel dump. This time I got an error on the console, "File size limit exceeded", but the system allows unlimited file size. Also, pg_dump without the -j flag goes through fine. Do you guys know what's going on with parallel dump?

The system is 64-bit CentOS (2.6.32-504.8.1.el6.x86_64 #1 SMP Wed Jan 28 21:11:36 UTC 2015 x86_64 x86_64 x86_64 GNU/Linux) with an ext4 file system.

    limit
    cputime      unlimited
    filesize     unlimited
    datasize     unlimited
    stacksize    10240 kbytes
    coredumpsize 0 kbytes
    memoryuse    unlimited
    vmemoryuse   unlimited
    descriptors  25000
    memorylocked 64 kbytes
    maxproc      1024
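A detail worth checking here: parallel dump only works with the directory archive format, which writes one data file per table, so a per-process file size limit inherited by the shell or script that launches pg_dump will only surface on the largest tables. The `limit` output above is what an interactive csh session reports; the values that matter are the ones in effect where pg_dump actually runs. A quick way to compare them (the database name and output directory are placeholders):

    # run these from the same shell or script that launches pg_dump,
    # not from a separate interactive login
    limit filesize                 # csh/tcsh, matches the output above
    ulimit -Sf; ulimit -Hf         # bash equivalents: soft and hard file size limits

    pg_dump -Fd -j 4 -f /backup/mydb.dir mydb    # one data file per table in mydb.dir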
From: Sterfield [mailto:sterfield@xxxxxxxxx]

2015-02-20 14:26 GMT-08:00 Shanker Singh <ssingh@xxxxxxx>:
I tried turning off SSL renegotiation by setting "ssl_renegotiation_limit = 0" in postgresql.conf, but it had no effect. The parallel dump still fails on large tables consistently.

Hi,

Maybe you could try to set up an SSH connection between the two servers, with a keepalive option, and leave it open for a long time (at least the duration of your backup), just to test whether your SSH connection is still being cut after some time. That way, you will be sure whether the problem is related to SSH or to PostgreSQL.

Thanks,
Guillaume
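For reference, one way to run that kind of keepalive test from the client machine is sketched below; the user and host are placeholders, and the interval and count values are only examples:

    # keep an SSH session to the database server open for the whole dump;
    # the client sends a keepalive probe every 60 s and gives up after 5
    # unanswered probes, so a silently dropped connection shows up quickly
    ssh -o ServerAliveInterval=60 -o ServerAliveCountMax=5 postgres@db-server
    # leave this session open while pg_dump runs in another terminal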