On 07/27/2018 03:04 PM, Dimitri Maziuk wrote:
On 07/26/2018 07:11 PM, Adrian Klaver wrote:
On 07/26/2018 04:48 PM, Dimitri Maziuk wrote:
...
The publication foopub is at this point fubar I take it? And needs to be
re-created on the publisher and reconnected on the subscriber? Complete
with initial resync?
Not sure. Personally I would try:
1) ALTER PUBLICATION foopub DROP TABLE foo|bar;
2) ALTER PUBLICATION foopub ADD TABLE foo|bar;
3) ALTER SUBSCRIPTION sub_name REFRESH PUBLICATION;
If you get to 3) it will re-sync the data unless you tell it otherwise.
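Spelled out, assuming the table in question is foo and the subscription
is named foosub (a placeholder for sub_name above), the sequence would
look like this:

    -- On the publisher: drop and re-add the table to the publication.
    ALTER PUBLICATION foopub DROP TABLE foo;
    ALTER PUBLICATION foopub ADD TABLE foo;

    -- On the subscriber: pick up the changed table list. By default
    -- this re-copies the table's data; copy_data = false skips the
    -- initial sync (the "unless you tell it otherwise" part).
    ALTER SUBSCRIPTION foosub REFRESH PUBLICATION;
    -- or:
    ALTER SUBSCRIPTION foosub REFRESH PUBLICATION WITH (copy_data = false);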
The above is probably dependent on the size of the publication. If you
did a publication for ALL it would make more sense to do the above than
if you did a publication for just foo or bar.
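If in doubt about what a publication covers, you can check on the
publisher side, e.g. (foopub as above):

    -- Is the publication FOR ALL TABLES?
    SELECT pubname, puballtables FROM pg_publication;

    -- Which tables does it currently include?
    SELECT * FROM pg_publication_tables WHERE pubname = 'foopub';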
... but if I did the publication for ALL, I could just use streaming
replication and then drop table/add table would replicate automagically ...
Well, I was just showing the extremes, from a single-table publication to
ALL tables. You can also do subsets of ALL. Remember that binary
(streaming) replication involves the whole Postgres cluster, i.e. all the
databases in the cluster; there is no choice in the matter. Also, it does
not allow you to shape what is replicated, in other words which forms of
DML you want replicated, e.g. UPDATE, INSERT, DELETE. Last but not least,
logical replication works across major versions and different OSes, which
binary replication does not.
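To make the range concrete, the publication definitions run roughly like
this (one at a time, of course; the names are placeholders, and the
publish parameter is what lets you limit which DML gets replicated):

    -- A single table:
    CREATE PUBLICATION foopub FOR TABLE foo;

    -- A subset of tables:
    CREATE PUBLICATION foopub FOR TABLE foo, bar;

    -- Everything in the database:
    CREATE PUBLICATION foopub FOR ALL TABLES;

    -- Shaped DML: replicate INSERTs and UPDATEs but not DELETEs.
    CREATE PUBLICATION foopub FOR TABLE foo
        WITH (publish = 'insert, update');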
It looks like we probably have to re-think a few of our workflows and
procedures, and until/unless that happens, logical replication won't do
what we want. That means figuring out this 13-million-files problem
becomes a very low priority for me, which is unfortunate: it'd be nice
to track it down and squash it...
Thanks everyone,
--
Adrian Klaver
adrian.klaver@xxxxxxxxxxx