On Tue, Aug 1, 2017 at 4:27 PM, Alex Samad <alex@xxxxxxxxxxxx> wrote:
> Hi
>
> So just to go over what I have:
>
> Server A (this is the original pgsql server, 9.2)
>
> Server X and Server Y ... PGSQL 9.6 in a cluster - streaming replication
> with hot standby.
>
> I have 2 tables, about 2.5T of disk space.
>
> I want to get the data from A into X, and X will replicate into Y.
>
> I am currently on X using this command:
>
> pg_dump -U <USER> -h <Server A> -t BIGTABLE -a <DB> | sudo -u postgres -i psql -q <DB>;
>
> This is taking a long time; it's been 2 days and I have transferred
> around 2T. This is just a test to see how long it takes and to populate
> my new UAT env, so I will have to do it again.
>
> The problem is time: the pg_dump process is single-threaded.
> I have 2 routers in between A and X, but it's 10G networking - yet my
> network graphs don't show much traffic.
>
> Server X is still in use; there are still records being inserted into
> the tables.
>
> How can I make this faster?
>
> I could shut down server A and present its disks to server X. Could I
> load these up in PGSQL and do a table-to-table copy? I presume this
> would be faster ... is this possible? How do I get around the same DB
> name?
>
> What other solutions do I have?

Yes, but if it's taking days to transfer 2TB then you need to investigate
where your performance is tanking.

Have you tried rsyncing / scping files across the network to see how fast
your network connection is?

Have you tried just pg_dumping / restoring locally to get an idea of how
fast you can dump / restore without doing it over a network connection?

Are you IO bound? Network bound? CPU bound? Is the destination copying
data, or building indexes?

Do you insert into a schema that already has indexes in place? If so,
have you tried dropping the indexes first and rebuilding them?
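For the raw network check, pushing a chunk of throwaway data over ssh
between the two boxes gives a quick baseline (the host name below is a
placeholder); dd prints an MB/s figure when it finishes, which you can
compare against what a 10G link ought to manage:

    # ~10GB of zeros from Server A to Server X, straight to /dev/null on the far end
    dd if=/dev/zero bs=1M count=10000 | ssh <Server X> 'cat > /dev/null'

That goes through ssh encryption, so treat it as a lower bound rather
than the link's ceiling.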
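For the local dump test, something along these lines on Server A takes
the network (and the destination's writes) out of the picture entirely --
just a sketch, reusing the placeholders from your command:

    # how fast can Server A read and serialize the table on its own?
    time pg_dump -U <USER> -t BIGTABLE -a <DB> > /dev/null

If that alone takes days, the bottleneck is on the source side; if it
flies, look at the network or at what the destination is doing.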
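And on the last point, the usual pattern for a bulk load into a table
that already has indexes is to drop the secondary indexes, load, then
recreate them. Rough sketch only -- the index and column names below are
placeholders, so adjust to your schema:

    # before the load
    sudo -u postgres psql -d <DB> -c "DROP INDEX IF EXISTS bigtable_some_idx;"

    # ... run the copy ...

    # after the load, rebuild the index in one pass over the data
    sudo -u postgres psql -d <DB> -c "CREATE INDEX bigtable_some_idx ON BIGTABLE (some_column);"

Building the index once at the end is far cheaper than maintaining it
row by row across a 2.5T load.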