On 8/9/19 9:51 AM, Shital A wrote:
On Fri, 9 Aug 2019, 21:25 Adrian Klaver, <adrian.klaver@xxxxxxxxxxx> wrote:
On 8/9/19 8:14 AM, Shital A wrote:
>
> Hello,
>
> 4) What techniques have you tried?
> INSERT INTO with a WITH statement, inserting 2,000,000 rows at a
> time. This takes 40 mins.
>
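For reference, one batch of the WITH-driven INSERT described above might look roughly like this; the table and column names are assumptions:

WITH batch AS (
    SELECT id, val
    FROM source_table
    ORDER BY id
    LIMIT 2000000
)
INSERT INTO target_table (id, val)
SELECT id, val FROM batch;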
To add to my previous post: if you already have data in a Postgres
database, then you could do:
pg_dump -d db -t some_table -a -f test_data.sql
That will dump the data only, in COPY format, for just that table. Then
you could apply it to your test database (after a TRUNCATE on the
table, assuming you want to start fresh):
psql -d test_db -f test_data.sql
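Putting it together, a sketch of the full sequence (the database and
table names here are only placeholders):

pg_dump -d db -t some_table -a -f test_data.sql
psql -d test_db -c 'TRUNCATE some_table'
psql -d test_db -f test_data.sql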
--
Adrian Klaver
adrian.klaver@xxxxxxxxxxx
Thanks for the reply, Adrian.
I missed one requirement: will these methods generate the WAL needed
for replication?
For COPY, AFAIK yes.
To verify, set up a small test table, COPY into it, and see if the
data shows up on the standby.
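A minimal sketch of that check, assuming a throwaway table and a CSV
file readable by the server:

-- On the primary:
CREATE TABLE copy_test (id int, val text);
COPY copy_test FROM '/tmp/test_data.csv' WITH (FORMAT csv);

-- Then on the standby:
SELECT count(*) FROM copy_test;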
pg_bulkload:
https://ossc-db.github.io/pg_bulkload/pg_bulkload.html
"IMPORTANT NOTE: Under streaming replication environment, pg_bulkload
does not work properly. See here for details.
Actually, the data is to check whether replication catches up. Below is
the scenario:
1. Have a master-slave cluster with replication set up.
2. Kill the master so that the standby takes over. We are using
Pacemaker for auto failover.
3. Insert 1 GB of data into the new master while replication is broken.
4. Start the old node as standby and check that the 1 GB of data gets
replicated.
As such testing might be frequent, we need to spend minimal time
generating the data (see the sketch below).
Master and slave are on the same network.
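One quick way to generate a volume like that is generate_series. A
minimal sketch, assuming a throwaway table; adjust the row count toward
roughly 1 GB on disk:

CREATE TABLE repl_test (id bigint, payload text);

INSERT INTO repl_test
SELECT i, repeat('x', 100)   -- ~100-byte payload per row
FROM generate_series(1, 10000000) AS s(i);

Both statements are WAL-logged for an ordinary table, so the standby
should receive the rows once replication resumes.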
Thanks!
--
Adrian Klaver
adrian.klaver@xxxxxxxxxxx