On Thu, Dec 11, 2014 at 10:30 AM, Tom Lane <tgl@xxxxxxxxxxxxx> wrote:
> needed to hold relcache entries for all 23000 tables :-(. If so there
> may not be any easy way around it, except perhaps replicating subsets
> of the tables. Unless you can boost the memory available to the backend
Here's what I'd suggest: break your replication up into something like 50 sets of ~500 tables each, then add them to replication one at a time, merging each into the main set. Something like this:
create & replicate set 1.
create & replicate set 2.
merge 2 into 1.
create & replicate set 3.
merge 3 into 1.
Repeat until done. This can be scripted; a rough sketch follows.
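
This sounds like a Slony-I setup (numbered replication sets that can be merged into one another), so here is a rough sketch of how the batching could be scripted, purely as an illustration: the cluster name, conninfo strings, table names, and batch size below are placeholders I invented, not details from your cluster, and you'd still need to confirm each batch's subscription has caught up before issuing the merge. The Python just writes one slonik script per batch.

#!/usr/bin/env python3
# Rough sketch: emit one slonik script per batch of ~500 tables.
# Cluster name, conninfo strings, and table names are all placeholders.

CLUSTER = "replication"                       # hypothetical cluster name
MASTER_CONNINFO = "dbname=app host=master"    # placeholder conninfo
SLAVE_CONNINFO = "dbname=app host=slave"      # placeholder conninfo
BATCH_SIZE = 500                              # ~50 batches for ~23000 tables

def slonik_for_batch(batch_no, tables, first_table_id):
    # Create set <batch_no>, add its tables, subscribe it, and (for every
    # batch after the first) merge it back into the main set (set 1).
    lines = [
        "cluster name = %s;" % CLUSTER,
        "node 1 admin conninfo = '%s';" % MASTER_CONNINFO,
        "node 2 admin conninfo = '%s';" % SLAVE_CONNINFO,
        "create set (id = %d, origin = 1, comment = 'batch %d');" % (batch_no, batch_no),
    ]
    for offset, tab in enumerate(tables):
        lines.append(
            "set add table (set id = %d, origin = 1, id = %d, "
            "fully qualified name = '%s');" % (batch_no, first_table_id + offset, tab)
        )
    lines.append("subscribe set (id = %d, provider = 1, receiver = 2, forward = no);" % batch_no)
    if batch_no > 1:
        # In practice, wait until the subscription is fully active (check
        # sl_subscribe, or use WAIT FOR EVENT) before running the merge.
        lines.append("merge set (id = 1, add id = %d, origin = 1);" % batch_no)
    return "\n".join(lines) + "\n"

if __name__ == "__main__":
    # The real table list would come from pg_class / information_schema.
    all_tables = ["public.table_%d" % i for i in range(1, 23001)]
    for n, start in enumerate(range(0, len(all_tables), BATCH_SIZE), start=1):
        batch = all_tables[start:start + BATCH_SIZE]
        with open("batch_%03d.slonik" % n, "w") as f:
            f.write(slonik_for_batch(n, batch, first_table_id=start + 1))

Then run each batch_NNN.slonik with slonik in order, checking that the previous batch's subscription is fully caught up before moving on to the next one.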
Given that you got about 50% of the way before it failed, even 4 sets of ~6,000 tables each might work out.