On 2017-07-31 11:02, Alex Samad wrote:
> Hi
>
> I am using pg_dump | psql to transfer data from my old 9.2 psql into a
> 9.6 psql.
>
> The new DB server is set up as a master replicating to a hot standby
> server.
>
> What I have noticed is that the rows don't get replicated over until
> the COPY FROM STDIN is finished... hard to test when you have millions
> of rows.
If you are "just testing", then you could use the COPY command
https://www.postgresql.org/docs/9.2/static/sql-copy.html
to generate a smaller dataset.
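For instance (a sketch only; "big_table", the file path, and the row count are placeholders), COPY accepts a query, so you can dump just a sample of a large table:

```sql
-- Sketch: write only the first 100K rows to a CSV file.
-- "big_table" and the output path are assumed names.
COPY (SELECT * FROM big_table LIMIT 100000)
    TO '/tmp/big_table_sample.csv' WITH (FORMAT csv);
```

You can then load that file on the test server with the matching COPY ... FROM to get a manageable dataset.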
> Is there a way to tell the master to replicate earlier
I highly doubt it, because the master cannot know what to replicate
until your transaction ends with a COMMIT. If you end with ROLLBACK,
or your last query is DELETE FROM <your_table>;
then there isn't even anything to replicate at all...
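To illustrate (a sketch, with "big_table" as an assumed table name): the rows streamed in by COPY are not visible to other sessions, or usable on the standby, until the surrounding transaction commits:

```sql
BEGIN;
COPY big_table FROM STDIN;  -- rows stream in, but stay invisible
-- ... millions of data rows ...
-- \.
COMMIT;                     -- only now do other sessions (and queries
                            -- on the hot standby) see the loaded rows
```

A ROLLBACK at the end instead of COMMIT would discard everything, which is why the load only appears on the standby once the COPY finishes and commits.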
> or is there a way to get pg_dump to bundle into say 100K rows at a
> time?
I'm not aware of such a feature, and it would be quite tricky because
of dependencies between records. You cannot simply dump the first 100K
rows from table A and the first 100K from table B, because row #9 from
table A may have a relation to row #100,001 from table B.
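As a manual workaround (a sketch, not a pg_dump feature: the table name and file paths are assumptions, and it only works if the batches don't violate foreign-key or other constraints mid-load), you could split the data file yourself, e.g. with `split -l 100000`, and load each chunk as its own statement. Outside an explicit BEGIN, psql runs each statement in its own transaction, so every chunk commits, and becomes replicable, as soon as it finishes:

```sql
-- Each COPY is its own (autocommitted) transaction, so each batch
-- of rows commits and can replicate to the standby independently.
-- "big_table" and the chunk paths are assumed names.
COPY big_table FROM '/srv/dump/chunk_aa';
COPY big_table FROM '/srv/dump/chunk_ab';
-- ... one COPY per chunk ...
```

The trade-off is losing the all-or-nothing property of a single transaction: if a later chunk fails, the earlier ones are already committed.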
--
Sent via pgsql-general mailing list (pgsql-general@xxxxxxxxxxxxxx)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general