BDR is currently memory-limited when it comes to very large transactions. At a guess, one of your big tables is large enough that the logical decoding facility BDR relies on can't keep track of the transaction properly. There's no hard limit; it depends on the details of the transaction and a number of other variables, but a single transaction of "many tens or hundreds of GB" is generally too much.

If I were loading such a big database, I'd probably do it with ETL tools that can split the load into smaller pieces and apply it progressively, so no single transaction gets that large.
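As a rough illustration of the "split it up" idea (not BDR-specific tooling, just a sketch): load the data in many modest transactions, committing between batches, so logical decoding never has to track one enormous transaction. This assumes Python with psycopg2, a table already created on all nodes, and a CSV dump; the table name, column count, file path, connection string, and batch size below are all placeholders to adjust.

    # Sketch: batched load, one commit per batch instead of one giant transaction.
    # Assumes table "big_table" (3 columns here, adjust the INSERT), a CSV dump
    # "big_table.csv", and a suitable connection string.
    import csv
    import psycopg2

    BATCH_SIZE = 50_000  # rows per transaction; tune to keep each txn small

    conn = psycopg2.connect("dbname=mydb user=postgres")

    def flush(cur, rows):
        # Insert one batch and commit it as its own small transaction.
        cur.executemany("INSERT INTO big_table VALUES (%s, %s, %s)", rows)
        conn.commit()

    with open("big_table.csv", newline="") as f, conn.cursor() as cur:
        batch = []
        for row in csv.reader(f):
            batch.append(row)
            if len(batch) >= BATCH_SIZE:
                flush(cur, batch)
                batch.clear()
        if batch:               # whatever is left over
            flush(cur, batch)

    conn.close()

The same effect can be had with whatever ETL tool you prefer; the important part is keeping each transaction's size well under the point where logical decoding runs out of memory.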