Hi Jeff.

Thanks for the clarification. I'll adjust wal_keep_segments to cover the
largest table expected in the backup.

Best regards,
Mads

From: Jeff Janes <jeff.janes@xxxxxxxxx>
To: "Mads.Tandrup@xxxxxxxxxxxxxxxxxxxxxx" <Mads.Tandrup@xxxxxxxxxxxxxxxxxxxxxx>
Cc: Albe Laurenz <laurenz.albe@xxxxxxxxxx>, "pgsql-general@xxxxxxxxxxxxxx" <pgsql-general@xxxxxxxxxxxxxx>
Date: 06-06-2013 18:33
Subject: Re: Streaming replication with sync slave, but disconnects due to missing WAL segments
Sent by: pgsql-general-owner@xxxxxxxxxxxxxx

On Wed, Jun 5, 2013 at 11:26 PM, <Mads.Tandrup@xxxxxxxxxxxxxxxxxxxxxx> wrote:

> Hi
>
> Thanks for your reply. Do you know of any options that I could give
> pg_dump/psql to avoid creating one big transaction? I'm using the plain
> text format for pg_dump.

With the plain text format it is already not one big transaction, unless
you pass -1 to psql. However, the load of any individual table will still
be a single transaction, so for a very large table it will still be a very
long transaction.

Using pg_dump with --inserts could get around this, but it would probably
be better to fix the fundamental problem by increasing wal_keep_segments
or something of that nature.

Cheers,

Jeff
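
(For later readers of the archive: a rough sketch of the options discussed
above. The database names, file names, and the segment count below are
illustrative assumptions only, not values taken from this thread.)

    # Plain-format dump and restore. Without -1, psql autocommits, so each
    # statement (including each table's COPY) is its own transaction; with
    # -1/--single-transaction the whole restore becomes one transaction.
    pg_dump --format=plain mydb > mydb.sql
    psql -d mydb_restore -f mydb.sql        # one transaction per statement
    psql -1 -d mydb_restore -f mydb.sql     # one big transaction (avoid here)

    # --inserts dumps one INSERT per row instead of COPY, so a huge table is
    # no longer loaded inside a single long-running COPY statement (slower).
    pg_dump --inserts mydb > mydb_inserts.sql

    # postgresql.conf on the primary: retain enough 16 MB WAL segments to
    # cover the WAL written while the largest table is being loaded, e.g.
    # wal_keep_segments = 512    # roughly 8 GB of retained WAL (example only)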